/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or https://opensource.org/licenses/CDDL-1.0.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */
/*
 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
 * Copyright (c) 2018, Joyent, Inc.
 * Copyright (c) 2011, 2020, Delphix. All rights reserved.
 * Copyright (c) 2014, Saso Kiselkov. All rights reserved.
 * Copyright (c) 2017, Nexenta Systems, Inc. All rights reserved.
 * Copyright (c) 2019, loli10K <ezomori.nozomu@gmail.com>. All rights reserved.
 * Copyright (c) 2020, George Amanakis. All rights reserved.
 * Copyright (c) 2019, Klara Inc.
 * Copyright (c) 2019, Allan Jude
 * Copyright (c) 2020, The FreeBSD Foundation [1]
 *
 * [1] Portions of this software were developed by Allan Jude
 *     under sponsorship from the FreeBSD Foundation.
 */

/*
 * DVA-based Adjustable Replacement Cache
 *
 * While much of the theory of operation used here is
 * based on the self-tuning, low overhead replacement cache
 * presented by Megiddo and Modha at FAST 2003, there are some
 * significant differences:
 *
 * 1. The Megiddo and Modha model assumes any page is evictable.
 * Pages in its cache cannot be "locked" into memory.  This makes
 * the eviction algorithm simple: evict the last page in the list.
 * This also makes the performance characteristics easy to reason
 * about.  Our cache is not so simple.  At any given moment, some
 * subset of the blocks in the cache are un-evictable because we
 * have handed out a reference to them.  Blocks are only evictable
 * when there are no external references active.  This makes
 * eviction far more problematic: we choose to evict the evictable
 * blocks that are the "lowest" in the list.
 *
 * There are times when it is not possible to evict the requested
 * space.  In these circumstances we are unable to adjust the cache
 * size.  To prevent the cache growing unbounded at these times we
 * implement a "cache throttle" that slows the flow of new data
 * into the cache until we can make space available.
 *
 * 2. The Megiddo and Modha model assumes a fixed cache size.
 * Pages are evicted when the cache is full and there is a cache
 * miss.  Our model has a variable sized cache.  It grows with
 * high use, but also tries to react to memory pressure from the
 * operating system: decreasing its size when system memory is
 * tight.
 *
 * 3. The Megiddo and Modha model assumes a fixed page size.  All
 * elements of the cache are therefore exactly the same size.  So
 * when adjusting the cache size following a cache miss, it's simply
 * a matter of choosing a single page to evict.  In our model, we
 * have variable sized cache blocks (ranging from 512 bytes to
 * 128K bytes).
 * We therefore choose a set of blocks to evict to make
 * space for a cache miss that approximates as closely as possible
 * the space used by the new block.
 *
 * See also: "ARC: A Self-Tuning, Low Overhead Replacement Cache"
 * by N. Megiddo & D. Modha, FAST 2003
 */

/*
 * The locking model:
 *
 * A new reference to a cache buffer can be obtained in two
 * ways: 1) via a hash table lookup using the DVA as a key,
 * or 2) via one of the ARC lists.  The arc_read() interface
 * uses method 1, while the internal ARC algorithms for
 * adjusting the cache use method 2.  We therefore provide two
 * types of locks: 1) the hash table lock array, and 2) the
 * ARC list locks.
 *
 * Buffers do not have their own mutexes, rather they rely on the
 * hash table mutexes for the bulk of their protection (i.e. most
 * fields in the arc_buf_hdr_t are protected by these mutexes).
 *
 * buf_hash_find() returns the appropriate mutex (held) when it
 * locates the requested buffer in the hash table.  It returns
 * NULL for the mutex if the buffer was not in the table.
 *
 * buf_hash_remove() expects the appropriate hash mutex to be
 * already held before it is invoked.
 *
 * Each ARC state also has a mutex which is used to protect the
 * buffer list associated with the state.  When attempting to
 * obtain a hash table lock while holding an ARC list lock you
 * must use mutex_tryenter() to avoid deadlock.  Also note that
 * the active state mutex must be held before the ghost state mutex.
 *
 * It is also possible to register a callback which is run when the
 * metadata limit is reached and no buffers can be safely evicted.  In
 * this case the arc user should drop a reference on some arc buffers so
 * they can be reclaimed.  For example, when using the ZPL each dentry
 * holds a reference on a znode.  These dentries must be pruned before
 * the arc buffer holding the znode can be safely evicted.
 *
 * Note that the majority of the performance stats are manipulated
 * with atomic operations.
 *
 * The L2ARC uses the l2ad_mtx on each vdev for the following:
 *
 *	- L2ARC buflist creation
 *	- L2ARC buflist eviction
 *	- L2ARC write completion, which walks L2ARC buflists
 *	- ARC header destruction, as it removes from L2ARC buflists
 *	- ARC header release, as it removes from L2ARC buflists
 */

/*
 * ARC operation:
 *
 * Every block that is in the ARC is tracked by an arc_buf_hdr_t structure.
 * This structure can point either to a block that is still in the cache or to
 * one that is only accessible in an L2 ARC device, or it can provide
 * information about a block that was recently evicted. If a block is
 * only accessible in the L2ARC, then the arc_buf_hdr_t only has enough
 * information to retrieve it from the L2ARC device. This information is
 * stored in the l2arc_buf_hdr_t sub-structure of the arc_buf_hdr_t. A block
 * that is in this state cannot access the data directly.
 *
 * Blocks that are actively being referenced or have not been evicted
 * are cached in the L1ARC. The L1ARC (l1arc_buf_hdr_t) is a structure within
 * the arc_buf_hdr_t that will point to the data block in memory. A block can
 * only be read by a consumer if it has an l1arc_buf_hdr_t. The L1ARC
 * caches data in two ways -- in a list of ARC buffers (arc_buf_t) and
 * also in the arc_buf_hdr_t's private physical data block pointer (b_pabd).
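 * (The arc_buf_t list provides per-consumer views of the block, while b_pabd
 * holds the hdr's own copy of the possibly compressed on-disk data.)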
 *
 * The L1ARC's data pointer may or may not be uncompressed. The ARC has the
 * ability to store the physical data (b_pabd) associated with the DVA of the
 * arc_buf_hdr_t. Since the b_pabd is a copy of the on-disk physical block,
 * it will match its on-disk compression characteristics. This behavior can be
 * disabled by setting 'zfs_compressed_arc_enabled' to B_FALSE. When the
 * compressed ARC functionality is disabled, the b_pabd will point to an
 * uncompressed version of the on-disk data.
 *
 * Data in the L1ARC is not accessed by consumers of the ARC directly. Each
 * arc_buf_hdr_t can have multiple ARC buffers (arc_buf_t) which reference it.
 * Each ARC buffer (arc_buf_t) is being actively accessed by a specific ARC
 * consumer. The ARC will provide references to this data and will keep it
 * cached until it is no longer in use. The ARC caches only the L1ARC's
 * physical data block and will evict any arc_buf_t that is no longer
 * referenced. The amount of memory consumed by the arc_buf_ts' data buffers
 * can be seen via the "overhead_size" kstat.
 *
 * Depending on the consumer, an arc_buf_t can be requested in uncompressed or
 * compressed form. The typical case is that consumers will want uncompressed
 * data, and when that happens a new data buffer is allocated where the data is
 * decompressed for them to use. Currently the only consumer who wants
 * compressed arc_buf_t's is "zfs send", when it streams data exactly as it
 * exists on disk. When this happens, the arc_buf_t's data buffer is shared
 * with the arc_buf_hdr_t.
 *
 * Here is a diagram showing an arc_buf_hdr_t referenced by two arc_buf_t's.
 * The first one is owned by a compressed send consumer (and therefore
 * references the same compressed data buffer as the arc_buf_hdr_t) and the
 * second could be used by any other consumer (and has its own uncompressed
 * copy of the data buffer).
 *
 *   arc_buf_hdr_t
 *   +-----------+
 *   | fields    |
 *   | common to |
 *   | L1- and   |
 *   | L2ARC     |
 *   +-----------+
 *   | l2arc_buf_hdr_t
 *   |           |
 *   +-----------+
 *   | l1arc_buf_hdr_t
 *   |           |              arc_buf_t
 *   | b_buf     +------------>+-----------+      arc_buf_t
 *   | b_pabd    +-+           |b_next     +---->+-----------+
 *   +-----------+ |           |-----------|     |b_next     +-->NULL
 *                 |           |b_comp = T |     +-----------+
 *                 |           |b_data     +-+   |b_comp = F |
 *                 |           +-----------+ |   |b_data     +-+
 *                 +->+------+               |   +-----------+ |
 *        compressed  |      |               |                 |
 *           data     |      |<--------------+                 | uncompressed
 *                    +------+  compressed,                    |     data
 *                                shared                       +-->+------+
 *                                 data                            |      |
 *                                                                 |      |
 *                                                                 +------+
 *
 * When a consumer reads a block, the ARC must first look to see if the
 * arc_buf_hdr_t is cached. If the hdr is cached then the ARC allocates a new
 * arc_buf_t and either copies uncompressed data into a new data buffer from an
 * existing uncompressed arc_buf_t, decompresses the hdr's b_pabd buffer into a
 * new data buffer, or shares the hdr's b_pabd buffer, depending on whether the
 * hdr is compressed and the desired compression characteristics of the
 * arc_buf_t consumer. If the arc_buf_t ends up sharing data with the
 * arc_buf_hdr_t and both of them are uncompressed then the arc_buf_t must be
 * the last buffer in the hdr's b_buf list, however a shared compressed buf can
 * be anywhere in the hdr's list.
 *
 * The diagram below shows an example of an uncompressed ARC hdr that is
 * sharing its data with an arc_buf_t (note that the shared uncompressed buf is
 * the last element in the buf list):
 *
 *                arc_buf_hdr_t
 *                +-----------+
 *                |           |
 *                |           |
 *                |           |
 *                +-----------+
 * l2arc_buf_hdr_t|           |
 *                |           |
 *                +-----------+
 * l1arc_buf_hdr_t|           |
 *                |           |                 arc_buf_t    (shared)
 *                | b_buf     +------------>+---------+      arc_buf_t
 *                |           |             |b_next   +---->+---------+
 *                | b_pabd    +-+           |---------|     |b_next   +-->NULL
 *                +-----------+ |           |         |     +---------+
 *                              |           |b_data   +-+   |         |
 *                              |           +---------+ |   |b_data   +-+
 *                              +->+------+             |   +---------+ |
 *                                 |      |             |               |
 *                   uncompressed  |      |             |               |
 *                        data     +------+             |               |
 *                                    ^                 +->+------+     |
 *                                    |    uncompressed    |      |     |
 *                                    |        data        |      |     |
 *                                    |                     +------+    |
 *                                    +----------------------------------+
 *
 * Writing to the ARC requires that the ARC first discard the hdr's b_pabd
 * since the physical block is about to be rewritten. The new data contents
 * will be contained in the arc_buf_t. As the I/O pipeline performs the write,
 * it may compress the data before writing it to disk. The ARC will be called
 * with the transformed data and will memcpy the transformed on-disk block into
 * a newly allocated b_pabd. Writes are always done into buffers which have
 * either been loaned (and hence are new and don't have other readers) or
 * buffers which have been released (and hence have their own hdr, if there
 * were originally other readers of the buf's original hdr). This ensures that
 * the ARC only needs to update a single buf and its hdr after a write occurs.
 *
 * When the L2ARC is in use, it will also take advantage of the b_pabd. The
 * L2ARC will always write the contents of b_pabd to the L2ARC. This means
 * that when compressed ARC is enabled the L2ARC blocks are identical
 * to the on-disk block in the main data pool. This provides a significant
 * advantage since the ARC can leverage the bp's checksum when reading from the
 * L2ARC to determine if the contents are valid. However, if the compressed
 * ARC is disabled, then the L2ARC's block must be transformed to look
 * like the physical block in the main data pool before comparing the
 * checksum and determining its validity.
 *
 * The L1ARC has a slightly different system for storing encrypted data.
 * Raw (encrypted + possibly compressed) data has a few subtle differences from
 * data that is just compressed. The biggest difference is that it is not
 * possible to decrypt encrypted data (or vice-versa) if the keys aren't loaded.
 * The other difference is that encryption cannot be treated as a suggestion.
 * If a caller would prefer compressed data but actually winds up with
 * uncompressed data, the worst that can happen is a performance hit. If the
 * caller requests encrypted data, however, we must be sure they actually get
 * it or else secret information could be leaked. Raw data is stored in
 * hdr->b_crypt_hdr.b_rabd. An encrypted header, therefore, may have both an
 * encrypted version and a decrypted version of its data at once. When a
 * caller needs a raw arc_buf_t, it is allocated and the data is copied out of
 * this header. To avoid complications with b_pabd, raw buffers cannot be
 * shared.
 */

#include <sys/spa.h>
#include <sys/zio.h>
#include <sys/spa_impl.h>
#include <sys/zio_compress.h>
#include <sys/zio_checksum.h>
#include <sys/zfs_context.h>
#include <sys/arc.h>
#include <sys/zfs_refcount.h>
#include <sys/vdev.h>
#include <sys/vdev_impl.h>
#include <sys/dsl_pool.h>
#include <sys/multilist.h>
#include <sys/abd.h>
#include <sys/zil.h>
#include <sys/fm/fs/zfs.h>
#include <sys/callb.h>
#include <sys/kstat.h>
#include <sys/zthr.h>
#include <zfs_fletcher.h>
#include <sys/arc_impl.h>
#include <sys/trace_zfs.h>
#include <sys/aggsum.h>
#include <sys/wmsum.h>
#include <cityhash.h>
#include <sys/vdev_trim.h>
#include <sys/zfs_racct.h>
#include <sys/zstd/zstd.h>

#ifndef _KERNEL
/* set with ZFS_DEBUG=watch, to enable watchpoints on frozen buffers */
boolean_t arc_watch = B_FALSE;
#endif

/*
 * This thread's job is to keep enough free memory in the system, by
 * calling arc_kmem_reap_soon() plus arc_reduce_target_size(), which improves
 * arc_available_memory().
 */
static zthr_t *arc_reap_zthr;

/*
 * This thread's job is to keep arc_size under arc_c, by calling
 * arc_evict(), which improves arc_is_overflowing().
 */
static zthr_t *arc_evict_zthr;
static arc_buf_hdr_t **arc_state_evict_markers;
static int arc_state_evict_marker_count;

static kmutex_t arc_evict_lock;
static boolean_t arc_evict_needed = B_FALSE;
static clock_t arc_last_uncached_flush;

/*
 * Count of bytes evicted since boot.
 */
static uint64_t arc_evict_count;

/*
 * List of arc_evict_waiter_t's, representing threads waiting for the
 * arc_evict_count to reach specific values.
 */
static list_t arc_evict_waiters;

/*
 * When arc_is_overflowing(), arc_get_data_impl() waits for this percent of
 * the requested amount of data to be evicted.  For example, by default for
 * every 2KB that's evicted, 1KB of it may be "reused" by a new allocation.
 * Since this is above 100%, it ensures that progress is made towards getting
 * arc_size under arc_c.  Since this is finite, it ensures that allocations
 * can still happen, even during the potentially long time that arc_size is
 * more than arc_c.
 */
static uint_t zfs_arc_eviction_pct = 200;

/*
 * The number of headers to evict in arc_evict_state_impl() before
 * dropping the sublist lock and evicting from another sublist. A lower
 * value means we're more likely to evict the "correct" header (i.e. the
 * oldest header in the arc state), but comes with higher overhead
 * (i.e. more invocations of arc_evict_state_impl()).
 */
static uint_t zfs_arc_evict_batch_limit = 10;

/* number of seconds before growing cache again */
uint_t arc_grow_retry = 5;

/*
 * Minimum time between calls to arc_kmem_reap_soon().
 */
static const int arc_kmem_cache_reap_retry_ms = 1000;

/* shift of arc_c for calculating overflow limit in arc_get_data_impl */
static int zfs_arc_overflow_shift = 8;

/* log2(fraction of arc to reclaim) */
uint_t arc_shrink_shift = 7;

/* percent of pagecache to reclaim arc to */
#ifdef _KERNEL
uint_t zfs_arc_pc_percent = 0;
#endif

/*
 * log2(fraction of ARC which must be free to allow growing).
 * I.e.
 * if there is less than arc_c >> arc_no_grow_shift free memory,
 * when reading a new block into the ARC, we will evict an equal-sized block
 * from the ARC.
 *
 * This must be less than arc_shrink_shift, so that when we shrink the ARC,
 * we will still not allow it to grow.
 */
uint_t arc_no_grow_shift = 5;


/*
 * minimum lifespan of a prefetch block in clock ticks
 * (initialized in arc_init())
 */
static uint_t arc_min_prefetch_ms;
static uint_t arc_min_prescient_prefetch_ms;

/*
 * If this percent of memory is free, don't throttle.
 */
uint_t arc_lotsfree_percent = 10;

/*
 * The arc has filled available memory and has now warmed up.
 */
boolean_t arc_warm;

/*
 * These tunables are for performance analysis.
 */
uint64_t zfs_arc_max = 0;
uint64_t zfs_arc_min = 0;
static uint64_t zfs_arc_dnode_limit = 0;
static uint_t zfs_arc_dnode_reduce_percent = 10;
static uint_t zfs_arc_grow_retry = 0;
static uint_t zfs_arc_shrink_shift = 0;
uint_t zfs_arc_average_blocksize = 8 * 1024; /* 8KB */

/*
 * ARC dirty data constraints for arc_tempreserve_space() throttle:
 * * total dirty data limit
 * * anon block dirty limit
 * * each pool's anon allowance
 */
static const unsigned long zfs_arc_dirty_limit_percent = 50;
static const unsigned long zfs_arc_anon_limit_percent = 25;
static const unsigned long zfs_arc_pool_dirty_percent = 20;

/*
 * Enable or disable compressed arc buffers.
 */
int zfs_compressed_arc_enabled = B_TRUE;

/*
 * Balance between metadata and data on ghost hits.  Values above 100
 * increase metadata caching by proportionally reducing effect of ghost
 * data hits on target data/metadata rate.
 */
static uint_t zfs_arc_meta_balance = 500;

/*
 * Percentage that can be consumed by dnodes of ARC meta buffers.
 */
static uint_t zfs_arc_dnode_limit_percent = 10;

/*
 * These tunables are Linux-specific
 */
static uint64_t zfs_arc_sys_free = 0;
static uint_t zfs_arc_min_prefetch_ms = 0;
static uint_t zfs_arc_min_prescient_prefetch_ms = 0;
static uint_t zfs_arc_lotsfree_percent = 10;

/*
 * Number of arc_prune threads
 */
static int zfs_arc_prune_task_threads = 1;

/* The 7 states: */
arc_state_t ARC_anon;
arc_state_t ARC_mru;
arc_state_t ARC_mru_ghost;
arc_state_t ARC_mfu;
arc_state_t ARC_mfu_ghost;
arc_state_t ARC_l2c_only;
arc_state_t ARC_uncached;

arc_stats_t arc_stats = {
	{ "hits", KSTAT_DATA_UINT64 },
	{ "iohits", KSTAT_DATA_UINT64 },
	{ "misses", KSTAT_DATA_UINT64 },
	{ "demand_data_hits", KSTAT_DATA_UINT64 },
	{ "demand_data_iohits", KSTAT_DATA_UINT64 },
	{ "demand_data_misses", KSTAT_DATA_UINT64 },
	{ "demand_metadata_hits", KSTAT_DATA_UINT64 },
	{ "demand_metadata_iohits", KSTAT_DATA_UINT64 },
	{ "demand_metadata_misses", KSTAT_DATA_UINT64 },
	{ "prefetch_data_hits", KSTAT_DATA_UINT64 },
	{ "prefetch_data_iohits", KSTAT_DATA_UINT64 },
	{ "prefetch_data_misses", KSTAT_DATA_UINT64 },
	{ "prefetch_metadata_hits", KSTAT_DATA_UINT64 },
	{ "prefetch_metadata_iohits", KSTAT_DATA_UINT64 },
	{ "prefetch_metadata_misses", KSTAT_DATA_UINT64 },
	{ "mru_hits", KSTAT_DATA_UINT64 },
	{ "mru_ghost_hits", KSTAT_DATA_UINT64 },
	{ "mfu_hits", KSTAT_DATA_UINT64 },
	{ "mfu_ghost_hits", KSTAT_DATA_UINT64 },
	{ "uncached_hits", KSTAT_DATA_UINT64 },
	{ "deleted", KSTAT_DATA_UINT64 },
	{ "mutex_miss", KSTAT_DATA_UINT64 },
	{ "access_skip", KSTAT_DATA_UINT64 },
	{ "evict_skip", KSTAT_DATA_UINT64 },
	{ "evict_not_enough", KSTAT_DATA_UINT64 },
	{ "evict_l2_cached", KSTAT_DATA_UINT64 },
	{ "evict_l2_eligible", KSTAT_DATA_UINT64 },
	{ "evict_l2_eligible_mfu", KSTAT_DATA_UINT64 },
	{ "evict_l2_eligible_mru", KSTAT_DATA_UINT64 },
	{ "evict_l2_ineligible", KSTAT_DATA_UINT64 },
	{ "evict_l2_skip", KSTAT_DATA_UINT64 },
	{ "hash_elements", KSTAT_DATA_UINT64 },
	{ "hash_elements_max", KSTAT_DATA_UINT64 },
	{ "hash_collisions", KSTAT_DATA_UINT64 },
	{ "hash_chains", KSTAT_DATA_UINT64 },
	{ "hash_chain_max", KSTAT_DATA_UINT64 },
	{ "meta", KSTAT_DATA_UINT64 },
	{ "pd", KSTAT_DATA_UINT64 },
	{ "pm", KSTAT_DATA_UINT64 },
	{ "c", KSTAT_DATA_UINT64 },
	{ "c_min", KSTAT_DATA_UINT64 },
	{ "c_max", KSTAT_DATA_UINT64 },
	{ "size", KSTAT_DATA_UINT64 },
	{ "compressed_size", KSTAT_DATA_UINT64 },
	{ "uncompressed_size", KSTAT_DATA_UINT64 },
	{ "overhead_size", KSTAT_DATA_UINT64 },
	{ "hdr_size", KSTAT_DATA_UINT64 },
	{ "data_size", KSTAT_DATA_UINT64 },
	{ "metadata_size", KSTAT_DATA_UINT64 },
	{ "dbuf_size", KSTAT_DATA_UINT64 },
	{ "dnode_size", KSTAT_DATA_UINT64 },
	{ "bonus_size", KSTAT_DATA_UINT64 },
#if defined(COMPAT_FREEBSD11)
	{ "other_size", KSTAT_DATA_UINT64 },
#endif
	{ "anon_size", KSTAT_DATA_UINT64 },
	{ "anon_data", KSTAT_DATA_UINT64 },
	{ "anon_metadata", KSTAT_DATA_UINT64 },
	{ "anon_evictable_data", KSTAT_DATA_UINT64 },
	{ "anon_evictable_metadata", KSTAT_DATA_UINT64 },
	{ "mru_size", KSTAT_DATA_UINT64 },
	{ "mru_data", KSTAT_DATA_UINT64 },
	{ "mru_metadata", KSTAT_DATA_UINT64 },
	{ "mru_evictable_data", KSTAT_DATA_UINT64 },
	{ "mru_evictable_metadata", KSTAT_DATA_UINT64 },
	{ "mru_ghost_size", KSTAT_DATA_UINT64 },
"mru_ghost_data", KSTAT_DATA_UINT64 }, 544 { "mru_ghost_metadata", KSTAT_DATA_UINT64 }, 545 { "mru_ghost_evictable_data", KSTAT_DATA_UINT64 }, 546 { "mru_ghost_evictable_metadata", KSTAT_DATA_UINT64 }, 547 { "mfu_size", KSTAT_DATA_UINT64 }, 548 { "mfu_data", KSTAT_DATA_UINT64 }, 549 { "mfu_metadata", KSTAT_DATA_UINT64 }, 550 { "mfu_evictable_data", KSTAT_DATA_UINT64 }, 551 { "mfu_evictable_metadata", KSTAT_DATA_UINT64 }, 552 { "mfu_ghost_size", KSTAT_DATA_UINT64 }, 553 { "mfu_ghost_data", KSTAT_DATA_UINT64 }, 554 { "mfu_ghost_metadata", KSTAT_DATA_UINT64 }, 555 { "mfu_ghost_evictable_data", KSTAT_DATA_UINT64 }, 556 { "mfu_ghost_evictable_metadata", KSTAT_DATA_UINT64 }, 557 { "uncached_size", KSTAT_DATA_UINT64 }, 558 { "uncached_data", KSTAT_DATA_UINT64 }, 559 { "uncached_metadata", KSTAT_DATA_UINT64 }, 560 { "uncached_evictable_data", KSTAT_DATA_UINT64 }, 561 { "uncached_evictable_metadata", KSTAT_DATA_UINT64 }, 562 { "l2_hits", KSTAT_DATA_UINT64 }, 563 { "l2_misses", KSTAT_DATA_UINT64 }, 564 { "l2_prefetch_asize", KSTAT_DATA_UINT64 }, 565 { "l2_mru_asize", KSTAT_DATA_UINT64 }, 566 { "l2_mfu_asize", KSTAT_DATA_UINT64 }, 567 { "l2_bufc_data_asize", KSTAT_DATA_UINT64 }, 568 { "l2_bufc_metadata_asize", KSTAT_DATA_UINT64 }, 569 { "l2_feeds", KSTAT_DATA_UINT64 }, 570 { "l2_rw_clash", KSTAT_DATA_UINT64 }, 571 { "l2_read_bytes", KSTAT_DATA_UINT64 }, 572 { "l2_write_bytes", KSTAT_DATA_UINT64 }, 573 { "l2_writes_sent", KSTAT_DATA_UINT64 }, 574 { "l2_writes_done", KSTAT_DATA_UINT64 }, 575 { "l2_writes_error", KSTAT_DATA_UINT64 }, 576 { "l2_writes_lock_retry", KSTAT_DATA_UINT64 }, 577 { "l2_evict_lock_retry", KSTAT_DATA_UINT64 }, 578 { "l2_evict_reading", KSTAT_DATA_UINT64 }, 579 { "l2_evict_l1cached", KSTAT_DATA_UINT64 }, 580 { "l2_free_on_write", KSTAT_DATA_UINT64 }, 581 { "l2_abort_lowmem", KSTAT_DATA_UINT64 }, 582 { "l2_cksum_bad", KSTAT_DATA_UINT64 }, 583 { "l2_io_error", KSTAT_DATA_UINT64 }, 584 { "l2_size", KSTAT_DATA_UINT64 }, 585 { "l2_asize", KSTAT_DATA_UINT64 }, 586 { "l2_hdr_size", KSTAT_DATA_UINT64 }, 587 { "l2_log_blk_writes", KSTAT_DATA_UINT64 }, 588 { "l2_log_blk_avg_asize", KSTAT_DATA_UINT64 }, 589 { "l2_log_blk_asize", KSTAT_DATA_UINT64 }, 590 { "l2_log_blk_count", KSTAT_DATA_UINT64 }, 591 { "l2_data_to_meta_ratio", KSTAT_DATA_UINT64 }, 592 { "l2_rebuild_success", KSTAT_DATA_UINT64 }, 593 { "l2_rebuild_unsupported", KSTAT_DATA_UINT64 }, 594 { "l2_rebuild_io_errors", KSTAT_DATA_UINT64 }, 595 { "l2_rebuild_dh_errors", KSTAT_DATA_UINT64 }, 596 { "l2_rebuild_cksum_lb_errors", KSTAT_DATA_UINT64 }, 597 { "l2_rebuild_lowmem", KSTAT_DATA_UINT64 }, 598 { "l2_rebuild_size", KSTAT_DATA_UINT64 }, 599 { "l2_rebuild_asize", KSTAT_DATA_UINT64 }, 600 { "l2_rebuild_bufs", KSTAT_DATA_UINT64 }, 601 { "l2_rebuild_bufs_precached", KSTAT_DATA_UINT64 }, 602 { "l2_rebuild_log_blks", KSTAT_DATA_UINT64 }, 603 { "memory_throttle_count", KSTAT_DATA_UINT64 }, 604 { "memory_direct_count", KSTAT_DATA_UINT64 }, 605 { "memory_indirect_count", KSTAT_DATA_UINT64 }, 606 { "memory_all_bytes", KSTAT_DATA_UINT64 }, 607 { "memory_free_bytes", KSTAT_DATA_UINT64 }, 608 { "memory_available_bytes", KSTAT_DATA_INT64 }, 609 { "arc_no_grow", KSTAT_DATA_UINT64 }, 610 { "arc_tempreserve", KSTAT_DATA_UINT64 }, 611 { "arc_loaned_bytes", KSTAT_DATA_UINT64 }, 612 { "arc_prune", KSTAT_DATA_UINT64 }, 613 { "arc_meta_used", KSTAT_DATA_UINT64 }, 614 { "arc_dnode_limit", KSTAT_DATA_UINT64 }, 615 { "async_upgrade_sync", KSTAT_DATA_UINT64 }, 616 { "predictive_prefetch", KSTAT_DATA_UINT64 }, 617 { "demand_hit_predictive_prefetch", 
	{ "demand_iohit_predictive_prefetch", KSTAT_DATA_UINT64 },
	{ "prescient_prefetch", KSTAT_DATA_UINT64 },
	{ "demand_hit_prescient_prefetch", KSTAT_DATA_UINT64 },
	{ "demand_iohit_prescient_prefetch", KSTAT_DATA_UINT64 },
	{ "arc_need_free", KSTAT_DATA_UINT64 },
	{ "arc_sys_free", KSTAT_DATA_UINT64 },
	{ "arc_raw_size", KSTAT_DATA_UINT64 },
	{ "cached_only_in_progress", KSTAT_DATA_UINT64 },
	{ "abd_chunk_waste_size", KSTAT_DATA_UINT64 },
};

arc_sums_t arc_sums;

#define	ARCSTAT_MAX(stat, val) { \
	uint64_t m; \
	while ((val) > (m = arc_stats.stat.value.ui64) && \
	    (m != atomic_cas_64(&arc_stats.stat.value.ui64, m, (val)))) \
		continue; \
}

/*
 * We define a macro to allow ARC hits/misses to be easily broken down by
 * two separate conditions, giving a total of four different subtypes for
 * each of hits and misses (so eight statistics total).
 */
#define	ARCSTAT_CONDSTAT(cond1, stat1, notstat1, cond2, stat2, notstat2, stat) \
	if (cond1) { \
		if (cond2) { \
			ARCSTAT_BUMP(arcstat_##stat1##_##stat2##_##stat); \
		} else { \
			ARCSTAT_BUMP(arcstat_##stat1##_##notstat2##_##stat); \
		} \
	} else { \
		if (cond2) { \
			ARCSTAT_BUMP(arcstat_##notstat1##_##stat2##_##stat); \
		} else { \
			ARCSTAT_BUMP(arcstat_##notstat1##_##notstat2##_##stat);\
		} \
	}

/*
 * This macro allows us to use kstats as floating averages. Each time we
 * update this kstat, we first factor it and the update value by
 * ARCSTAT_AVG_FACTOR to shrink the new value's contribution to the overall
 * average. This macro assumes that integer loads and stores are atomic, but
 * is not safe for multiple writers updating the kstat in parallel (only the
 * last writer's update will remain).
 */
#define	ARCSTAT_F_AVG_FACTOR	3
#define	ARCSTAT_F_AVG(stat, value) \
	do { \
		uint64_t x = ARCSTAT(stat); \
		x = x - x / ARCSTAT_F_AVG_FACTOR + \
		    (value) / ARCSTAT_F_AVG_FACTOR; \
		ARCSTAT(stat) = x; \
	} while (0)

static kstat_t *arc_ksp;

/*
 * There are several ARC variables that are critical to export as kstats --
 * but we don't want to have to grovel around in the kstat whenever we wish to
 * manipulate them.  For these variables, we therefore define them to be in
 * terms of the statistic variable.  This assures that we are not introducing
 * the possibility of inconsistency by having shadow copies of the variables,
 * while still allowing the code to be readable.
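 * For example, arc_tempreserve below is simply an alias for the
 * arcstat_tempreserve kstat value.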
 */
#define	arc_tempreserve	ARCSTAT(arcstat_tempreserve)
#define	arc_loaned_bytes	ARCSTAT(arcstat_loaned_bytes)
#define	arc_dnode_limit	ARCSTAT(arcstat_dnode_limit) /* max size for dnodes */
#define	arc_need_free	ARCSTAT(arcstat_need_free) /* waiting to be evicted */

hrtime_t arc_growtime;
list_t arc_prune_list;
kmutex_t arc_prune_mtx;
taskq_t *arc_prune_taskq;

#define	GHOST_STATE(state) \
	((state) == arc_mru_ghost || (state) == arc_mfu_ghost || \
	(state) == arc_l2c_only)

#define	HDR_IN_HASH_TABLE(hdr)	((hdr)->b_flags & ARC_FLAG_IN_HASH_TABLE)
#define	HDR_IO_IN_PROGRESS(hdr)	((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS)
#define	HDR_IO_ERROR(hdr)	((hdr)->b_flags & ARC_FLAG_IO_ERROR)
#define	HDR_PREFETCH(hdr)	((hdr)->b_flags & ARC_FLAG_PREFETCH)
#define	HDR_PRESCIENT_PREFETCH(hdr) \
	((hdr)->b_flags & ARC_FLAG_PRESCIENT_PREFETCH)
#define	HDR_COMPRESSION_ENABLED(hdr) \
	((hdr)->b_flags & ARC_FLAG_COMPRESSED_ARC)

#define	HDR_L2CACHE(hdr)	((hdr)->b_flags & ARC_FLAG_L2CACHE)
#define	HDR_UNCACHED(hdr)	((hdr)->b_flags & ARC_FLAG_UNCACHED)
#define	HDR_L2_READING(hdr) \
	(((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS) && \
	((hdr)->b_flags & ARC_FLAG_HAS_L2HDR))
#define	HDR_L2_WRITING(hdr)	((hdr)->b_flags & ARC_FLAG_L2_WRITING)
#define	HDR_L2_EVICTED(hdr)	((hdr)->b_flags & ARC_FLAG_L2_EVICTED)
#define	HDR_L2_WRITE_HEAD(hdr)	((hdr)->b_flags & ARC_FLAG_L2_WRITE_HEAD)
#define	HDR_PROTECTED(hdr)	((hdr)->b_flags & ARC_FLAG_PROTECTED)
#define	HDR_NOAUTH(hdr)		((hdr)->b_flags & ARC_FLAG_NOAUTH)
#define	HDR_SHARED_DATA(hdr)	((hdr)->b_flags & ARC_FLAG_SHARED_DATA)

#define	HDR_ISTYPE_METADATA(hdr) \
	((hdr)->b_flags & ARC_FLAG_BUFC_METADATA)
#define	HDR_ISTYPE_DATA(hdr)	(!HDR_ISTYPE_METADATA(hdr))

#define	HDR_HAS_L1HDR(hdr)	((hdr)->b_flags & ARC_FLAG_HAS_L1HDR)
#define	HDR_HAS_L2HDR(hdr)	((hdr)->b_flags & ARC_FLAG_HAS_L2HDR)
#define	HDR_HAS_RABD(hdr) \
	(HDR_HAS_L1HDR(hdr) && HDR_PROTECTED(hdr) && \
	(hdr)->b_crypt_hdr.b_rabd != NULL)
#define	HDR_ENCRYPTED(hdr) \
	(HDR_PROTECTED(hdr) && DMU_OT_IS_ENCRYPTED((hdr)->b_crypt_hdr.b_ot))
#define	HDR_AUTHENTICATED(hdr) \
	(HDR_PROTECTED(hdr) && !DMU_OT_IS_ENCRYPTED((hdr)->b_crypt_hdr.b_ot))

/* For storing compression mode in b_flags */
#define	HDR_COMPRESS_OFFSET	(highbit64(ARC_FLAG_COMPRESS_0) - 1)

#define	HDR_GET_COMPRESS(hdr)	((enum zio_compress)BF32_GET((hdr)->b_flags, \
	HDR_COMPRESS_OFFSET, SPA_COMPRESSBITS))
#define	HDR_SET_COMPRESS(hdr, cmp)	BF32_SET((hdr)->b_flags, \
	HDR_COMPRESS_OFFSET, SPA_COMPRESSBITS, (cmp));

#define	ARC_BUF_LAST(buf)	((buf)->b_next == NULL)
#define	ARC_BUF_SHARED(buf)	((buf)->b_flags & ARC_BUF_FLAG_SHARED)
#define	ARC_BUF_COMPRESSED(buf)	((buf)->b_flags & ARC_BUF_FLAG_COMPRESSED)
#define	ARC_BUF_ENCRYPTED(buf)	((buf)->b_flags & ARC_BUF_FLAG_ENCRYPTED)

/*
 * Other sizes
 */

#define	HDR_FULL_SIZE ((int64_t)sizeof (arc_buf_hdr_t))
#define	HDR_L2ONLY_SIZE ((int64_t)offsetof(arc_buf_hdr_t, b_l1hdr))

/*
 * Hash table routines
 */

#define	BUF_LOCKS 2048
typedef struct buf_hash_table {
	uint64_t ht_mask;
	arc_buf_hdr_t **ht_table;
	kmutex_t ht_locks[BUF_LOCKS] ____cacheline_aligned;
} buf_hash_table_t;

static buf_hash_table_t buf_hash_table;

#define	BUF_HASH_INDEX(spa, dva, birth) \
	(buf_hash(spa, dva, birth) & buf_hash_table.ht_mask)
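/* Each hash bucket index maps onto one of the BUF_LOCKS cacheline-aligned mutexes. */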
#define	BUF_HASH_LOCK(idx)	(&buf_hash_table.ht_locks[idx & (BUF_LOCKS-1)])
#define	HDR_LOCK(hdr) \
	(BUF_HASH_LOCK(BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth)))

uint64_t zfs_crc64_table[256];

/*
 * Level 2 ARC
 */

#define	L2ARC_WRITE_SIZE	(8 * 1024 * 1024)	/* initial write max */
#define	L2ARC_HEADROOM		2			/* num of writes */

/*
 * If we discover during ARC scan any buffers to be compressed, we boost
 * our headroom for the next scanning cycle by this percentage multiple.
 */
#define	L2ARC_HEADROOM_BOOST	200
#define	L2ARC_FEED_SECS		1		/* caching interval secs */
#define	L2ARC_FEED_MIN_MS	200		/* min caching interval ms */

/*
 * We can feed L2ARC from two states of ARC buffers, mru and mfu,
 * and each of these states has two types: data and metadata.
 */
#define	L2ARC_FEED_TYPES	4

/* L2ARC Performance Tunables */
uint64_t l2arc_write_max = L2ARC_WRITE_SIZE;	/* def max write size */
uint64_t l2arc_write_boost = L2ARC_WRITE_SIZE;	/* extra warmup write */
uint64_t l2arc_headroom = L2ARC_HEADROOM;	/* # of dev writes */
uint64_t l2arc_headroom_boost = L2ARC_HEADROOM_BOOST;
uint64_t l2arc_feed_secs = L2ARC_FEED_SECS;	/* interval seconds */
uint64_t l2arc_feed_min_ms = L2ARC_FEED_MIN_MS;	/* min interval msecs */
int l2arc_noprefetch = B_TRUE;			/* don't cache prefetch bufs */
int l2arc_feed_again = B_TRUE;			/* turbo warmup */
int l2arc_norw = B_FALSE;			/* no reads during writes */
static uint_t l2arc_meta_percent = 33;		/* limit on headers size */

/*
 * L2ARC Internals
 */
static list_t L2ARC_dev_list;			/* device list */
static list_t *l2arc_dev_list;			/* device list pointer */
static kmutex_t l2arc_dev_mtx;			/* device list mutex */
static l2arc_dev_t *l2arc_dev_last;		/* last device used */
static list_t L2ARC_free_on_write;		/* free after write buf list */
static list_t *l2arc_free_on_write;		/* free after write list ptr */
static kmutex_t l2arc_free_on_write_mtx;	/* mutex for list */
static uint64_t l2arc_ndev;			/* number of devices */

typedef struct l2arc_read_callback {
	arc_buf_hdr_t		*l2rcb_hdr;	/* read header */
	blkptr_t		l2rcb_bp;	/* original blkptr */
	zbookmark_phys_t	l2rcb_zb;	/* original bookmark */
	int			l2rcb_flags;	/* original flags */
	abd_t			*l2rcb_abd;	/* temporary buffer */
} l2arc_read_callback_t;

typedef struct l2arc_data_free {
	/* protected by l2arc_free_on_write_mtx */
	abd_t			*l2df_abd;
	size_t			l2df_size;
	arc_buf_contents_t	l2df_type;
	list_node_t		l2df_list_node;
} l2arc_data_free_t;

typedef enum arc_fill_flags {
	ARC_FILL_LOCKED		= 1 << 0, /* hdr lock is held */
	ARC_FILL_COMPRESSED	= 1 << 1, /* fill with compressed data */
	ARC_FILL_ENCRYPTED	= 1 << 2, /* fill with encrypted data */
	ARC_FILL_NOAUTH		= 1 << 3, /* don't attempt to authenticate */
	ARC_FILL_IN_PLACE	= 1 << 4  /* fill in place (special case) */
} arc_fill_flags_t;

typedef enum arc_ovf_level {
	ARC_OVF_NONE,			/* ARC within target size. */
	ARC_OVF_SOME,			/* ARC is slightly overflowed. */
	ARC_OVF_SEVERE			/* ARC is severely overflowed. */
} arc_ovf_level_t;

static kmutex_t l2arc_feed_thr_lock;
static kcondvar_t l2arc_feed_thr_cv;
static uint8_t l2arc_thread_exit;

static kmutex_t l2arc_rebuild_thr_lock;
static kcondvar_t l2arc_rebuild_thr_cv;

enum arc_hdr_alloc_flags {
	ARC_HDR_ALLOC_RDATA	= 0x1,
	ARC_HDR_USE_RESERVE	= 0x4,
	ARC_HDR_ALLOC_LINEAR	= 0x8,
};


static abd_t *arc_get_data_abd(arc_buf_hdr_t *, uint64_t, const void *, int);
static void *arc_get_data_buf(arc_buf_hdr_t *, uint64_t, const void *);
static void arc_get_data_impl(arc_buf_hdr_t *, uint64_t, const void *, int);
static void arc_free_data_abd(arc_buf_hdr_t *, abd_t *, uint64_t, const void *);
static void arc_free_data_buf(arc_buf_hdr_t *, void *, uint64_t, const void *);
static void arc_free_data_impl(arc_buf_hdr_t *hdr, uint64_t size,
    const void *tag);
static void arc_hdr_free_abd(arc_buf_hdr_t *, boolean_t);
static void arc_hdr_alloc_abd(arc_buf_hdr_t *, int);
static void arc_hdr_destroy(arc_buf_hdr_t *);
static void arc_access(arc_buf_hdr_t *, arc_flags_t, boolean_t);
static void arc_buf_watch(arc_buf_t *);
static void arc_change_state(arc_state_t *, arc_buf_hdr_t *);

static arc_buf_contents_t arc_buf_type(arc_buf_hdr_t *);
static uint32_t arc_bufc_to_flags(arc_buf_contents_t);
static inline void arc_hdr_set_flags(arc_buf_hdr_t *hdr, arc_flags_t flags);
static inline void arc_hdr_clear_flags(arc_buf_hdr_t *hdr, arc_flags_t flags);

static boolean_t l2arc_write_eligible(uint64_t, arc_buf_hdr_t *);
static void l2arc_read_done(zio_t *);
static void l2arc_do_free_on_write(void);
static void l2arc_hdr_arcstats_update(arc_buf_hdr_t *hdr, boolean_t incr,
    boolean_t state_only);

static void arc_prune_async(uint64_t adjust);

#define	l2arc_hdr_arcstats_increment(hdr) \
	l2arc_hdr_arcstats_update((hdr), B_TRUE, B_FALSE)
#define	l2arc_hdr_arcstats_decrement(hdr) \
	l2arc_hdr_arcstats_update((hdr), B_FALSE, B_FALSE)
#define	l2arc_hdr_arcstats_increment_state(hdr) \
	l2arc_hdr_arcstats_update((hdr), B_TRUE, B_TRUE)
#define	l2arc_hdr_arcstats_decrement_state(hdr) \
	l2arc_hdr_arcstats_update((hdr), B_FALSE, B_TRUE)

/*
 * l2arc_exclude_special : A zfs module parameter that controls whether buffers
 *		present on special vdevs are eligible for caching in L2ARC. If
 *		set to 1, exclude dbufs on special vdevs from being cached to
 *		L2ARC.
 */
int l2arc_exclude_special = 0;

/*
 * l2arc_mfuonly : A ZFS module parameter that controls whether only MFU
 *		metadata and data are cached from ARC into L2ARC.
 */
static int l2arc_mfuonly = 0;

/*
 * L2ARC TRIM
 * l2arc_trim_ahead : A ZFS module parameter that controls how much ahead of
 *		the current write size (l2arc_write_max) we should TRIM if we
 *		have filled the device. It is defined as a percentage of the
 *		write size. If set to 100 we trim twice the space required to
 *		accommodate upcoming writes. A minimum of 64MB will be trimmed.
 *		It also enables TRIM of the whole L2ARC device upon creation or
 *		addition to an existing pool or if the header of the device is
 *		invalid upon importing a pool or onlining a cache device. The
 *		default is 0, which disables TRIM on L2ARC altogether as it can
 *		put significant stress on the underlying storage devices. This
 *		will vary depending on how well the specific device handles
 *		these commands.
 */
static uint64_t l2arc_trim_ahead = 0;

/*
 * Performance tuning of L2ARC persistence:
 *
 * l2arc_rebuild_enabled : A ZFS module parameter that controls whether adding
 *		an L2ARC device (either at pool import or later) will attempt
 *		to rebuild L2ARC buffer contents.
 * l2arc_rebuild_blocks_min_l2size : A ZFS module parameter that controls
 *		whether log blocks are written to the L2ARC device. If the L2ARC
 *		device is less than 1GB, the amount of data l2arc_evict()
 *		evicts is significant compared to the amount of restored L2ARC
 *		data. In this case do not write log blocks in L2ARC in order
 *		not to waste space.
 */
static int l2arc_rebuild_enabled = B_TRUE;
static uint64_t l2arc_rebuild_blocks_min_l2size = 1024 * 1024 * 1024;

/* L2ARC persistence rebuild control routines. */
void l2arc_rebuild_vdev(vdev_t *vd, boolean_t reopen);
static __attribute__((noreturn)) void l2arc_dev_rebuild_thread(void *arg);
static int l2arc_rebuild(l2arc_dev_t *dev);

/* L2ARC persistence read I/O routines. */
static int l2arc_dev_hdr_read(l2arc_dev_t *dev);
static int l2arc_log_blk_read(l2arc_dev_t *dev,
    const l2arc_log_blkptr_t *this_lp, const l2arc_log_blkptr_t *next_lp,
    l2arc_log_blk_phys_t *this_lb, l2arc_log_blk_phys_t *next_lb,
    zio_t *this_io, zio_t **next_io);
static zio_t *l2arc_log_blk_fetch(vdev_t *vd,
    const l2arc_log_blkptr_t *lp, l2arc_log_blk_phys_t *lb);
static void l2arc_log_blk_fetch_abort(zio_t *zio);

/* L2ARC persistence block restoration routines. */
static void l2arc_log_blk_restore(l2arc_dev_t *dev,
    const l2arc_log_blk_phys_t *lb, uint64_t lb_asize);
static void l2arc_hdr_restore(const l2arc_log_ent_phys_t *le,
    l2arc_dev_t *dev);

/* L2ARC persistence write I/O routines. */
static uint64_t l2arc_log_blk_commit(l2arc_dev_t *dev, zio_t *pio,
    l2arc_write_callback_t *cb);

/* L2ARC persistence auxiliary routines. */
boolean_t l2arc_log_blkptr_valid(l2arc_dev_t *dev,
    const l2arc_log_blkptr_t *lbp);
static boolean_t l2arc_log_blk_insert(l2arc_dev_t *dev,
    const arc_buf_hdr_t *ab);
boolean_t l2arc_range_check_overlap(uint64_t bottom,
    uint64_t top, uint64_t check);
static void l2arc_blk_fetch_done(zio_t *zio);
static inline uint64_t
    l2arc_log_blk_overhead(uint64_t write_sz, l2arc_dev_t *dev);

/*
 * We use Cityhash for this. It's fast, and has good hash properties without
 * requiring any large static buffers.
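 * The 64-bit result is folded into a table index by BUF_HASH_INDEX() above.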
 */
static uint64_t
buf_hash(uint64_t spa, const dva_t *dva, uint64_t birth)
{
	return (cityhash4(spa, dva->dva_word[0], dva->dva_word[1], birth));
}

#define	HDR_EMPTY(hdr) \
	((hdr)->b_dva.dva_word[0] == 0 && \
	(hdr)->b_dva.dva_word[1] == 0)

#define	HDR_EMPTY_OR_LOCKED(hdr) \
	(HDR_EMPTY(hdr) || MUTEX_HELD(HDR_LOCK(hdr)))

#define	HDR_EQUAL(spa, dva, birth, hdr) \
	((hdr)->b_dva.dva_word[0] == (dva)->dva_word[0]) && \
	((hdr)->b_dva.dva_word[1] == (dva)->dva_word[1]) && \
	((hdr)->b_birth == birth) && ((hdr)->b_spa == spa)

static void
buf_discard_identity(arc_buf_hdr_t *hdr)
{
	hdr->b_dva.dva_word[0] = 0;
	hdr->b_dva.dva_word[1] = 0;
	hdr->b_birth = 0;
}

static arc_buf_hdr_t *
buf_hash_find(uint64_t spa, const blkptr_t *bp, kmutex_t **lockp)
{
	const dva_t *dva = BP_IDENTITY(bp);
	uint64_t birth = BP_PHYSICAL_BIRTH(bp);
	uint64_t idx = BUF_HASH_INDEX(spa, dva, birth);
	kmutex_t *hash_lock = BUF_HASH_LOCK(idx);
	arc_buf_hdr_t *hdr;

	mutex_enter(hash_lock);
	for (hdr = buf_hash_table.ht_table[idx]; hdr != NULL;
	    hdr = hdr->b_hash_next) {
		if (HDR_EQUAL(spa, dva, birth, hdr)) {
			*lockp = hash_lock;
			return (hdr);
		}
	}
	mutex_exit(hash_lock);
	*lockp = NULL;
	return (NULL);
}

/*
 * Insert an entry into the hash table.  If there is already an element
 * equal to elem in the hash table, then the already existing element
 * will be returned and the new element will not be inserted.
 * Otherwise returns NULL.
 * If lockp == NULL, the caller is assumed to already hold the hash lock.
 */
static arc_buf_hdr_t *
buf_hash_insert(arc_buf_hdr_t *hdr, kmutex_t **lockp)
{
	uint64_t idx = BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth);
	kmutex_t *hash_lock = BUF_HASH_LOCK(idx);
	arc_buf_hdr_t *fhdr;
	uint32_t i;

	ASSERT(!DVA_IS_EMPTY(&hdr->b_dva));
	ASSERT(hdr->b_birth != 0);
	ASSERT(!HDR_IN_HASH_TABLE(hdr));

	if (lockp != NULL) {
		*lockp = hash_lock;
		mutex_enter(hash_lock);
	} else {
		ASSERT(MUTEX_HELD(hash_lock));
	}

	for (fhdr = buf_hash_table.ht_table[idx], i = 0; fhdr != NULL;
	    fhdr = fhdr->b_hash_next, i++) {
		if (HDR_EQUAL(hdr->b_spa, &hdr->b_dva, hdr->b_birth, fhdr))
			return (fhdr);
	}

	hdr->b_hash_next = buf_hash_table.ht_table[idx];
	buf_hash_table.ht_table[idx] = hdr;
	arc_hdr_set_flags(hdr, ARC_FLAG_IN_HASH_TABLE);

	/* collect some hash table performance data */
	if (i > 0) {
		ARCSTAT_BUMP(arcstat_hash_collisions);
		if (i == 1)
			ARCSTAT_BUMP(arcstat_hash_chains);

		ARCSTAT_MAX(arcstat_hash_chain_max, i);
	}
	uint64_t he = atomic_inc_64_nv(
	    &arc_stats.arcstat_hash_elements.value.ui64);
	ARCSTAT_MAX(arcstat_hash_elements_max, he);

	return (NULL);
}

static void
buf_hash_remove(arc_buf_hdr_t *hdr)
{
	arc_buf_hdr_t *fhdr, **hdrp;
	uint64_t idx = BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth);

	ASSERT(MUTEX_HELD(BUF_HASH_LOCK(idx)));
	ASSERT(HDR_IN_HASH_TABLE(hdr));

	hdrp = &buf_hash_table.ht_table[idx];
	while ((fhdr = *hdrp) != hdr) {
		ASSERT3P(fhdr, !=, NULL);
		hdrp = &fhdr->b_hash_next;
	}
	*hdrp = hdr->b_hash_next;
	hdr->b_hash_next = NULL;
	arc_hdr_clear_flags(hdr, ARC_FLAG_IN_HASH_TABLE);

	/* collect some hash table performance data */
	atomic_dec_64(&arc_stats.arcstat_hash_elements.value.ui64);

	if (buf_hash_table.ht_table[idx] &&
	    buf_hash_table.ht_table[idx]->b_hash_next == NULL)
		ARCSTAT_BUMPDOWN(arcstat_hash_chains);
}

/*
 * Global data structures and functions for the buf kmem cache.
 */

static kmem_cache_t *hdr_full_cache;
static kmem_cache_t *hdr_l2only_cache;
static kmem_cache_t *buf_cache;

static void
buf_fini(void)
{
#if defined(_KERNEL)
	/*
	 * Large allocations which do not require contiguous pages
	 * should be using vmem_free() in the linux kernel
	 */
	vmem_free(buf_hash_table.ht_table,
	    (buf_hash_table.ht_mask + 1) * sizeof (void *));
#else
	kmem_free(buf_hash_table.ht_table,
	    (buf_hash_table.ht_mask + 1) * sizeof (void *));
#endif
	for (int i = 0; i < BUF_LOCKS; i++)
		mutex_destroy(BUF_HASH_LOCK(i));
	kmem_cache_destroy(hdr_full_cache);
	kmem_cache_destroy(hdr_l2only_cache);
	kmem_cache_destroy(buf_cache);
}

/*
 * Constructor callback - called when the cache is empty
 * and a new buf is requested.
 */
static int
hdr_full_cons(void *vbuf, void *unused, int kmflag)
{
	(void) unused, (void) kmflag;
	arc_buf_hdr_t *hdr = vbuf;

	memset(hdr, 0, HDR_FULL_SIZE);
	hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS;
	zfs_refcount_create(&hdr->b_l1hdr.b_refcnt);
#ifdef ZFS_DEBUG
	mutex_init(&hdr->b_l1hdr.b_freeze_lock, NULL, MUTEX_DEFAULT, NULL);
#endif
	multilist_link_init(&hdr->b_l1hdr.b_arc_node);
	list_link_init(&hdr->b_l2hdr.b_l2node);
	arc_space_consume(HDR_FULL_SIZE, ARC_SPACE_HDRS);

	return (0);
}

static int
hdr_l2only_cons(void *vbuf, void *unused, int kmflag)
{
	(void) unused, (void) kmflag;
	arc_buf_hdr_t *hdr = vbuf;

	memset(hdr, 0, HDR_L2ONLY_SIZE);
	arc_space_consume(HDR_L2ONLY_SIZE, ARC_SPACE_L2HDRS);

	return (0);
}

static int
buf_cons(void *vbuf, void *unused, int kmflag)
{
	(void) unused, (void) kmflag;
	arc_buf_t *buf = vbuf;

	memset(buf, 0, sizeof (arc_buf_t));
	arc_space_consume(sizeof (arc_buf_t), ARC_SPACE_HDRS);

	return (0);
}

/*
 * Destructor callback - called when a cached buf is
 * no longer required.
 */
static void
hdr_full_dest(void *vbuf, void *unused)
{
	(void) unused;
	arc_buf_hdr_t *hdr = vbuf;

	ASSERT(HDR_EMPTY(hdr));
	zfs_refcount_destroy(&hdr->b_l1hdr.b_refcnt);
#ifdef ZFS_DEBUG
	mutex_destroy(&hdr->b_l1hdr.b_freeze_lock);
#endif
	ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));
	arc_space_return(HDR_FULL_SIZE, ARC_SPACE_HDRS);
}

static void
hdr_l2only_dest(void *vbuf, void *unused)
{
	(void) unused;
	arc_buf_hdr_t *hdr = vbuf;

	ASSERT(HDR_EMPTY(hdr));
	arc_space_return(HDR_L2ONLY_SIZE, ARC_SPACE_L2HDRS);
}

static void
buf_dest(void *vbuf, void *unused)
{
	(void) unused;
	(void) vbuf;

	arc_space_return(sizeof (arc_buf_t), ARC_SPACE_HDRS);
}

static void
buf_init(void)
{
	uint64_t *ct = NULL;
	uint64_t hsize = 1ULL << 12;
	int i, j;

	/*
	 * The hash table is big enough to fill all of physical memory
	 * with an average block size of zfs_arc_average_blocksize (default 8K).
	 * By default, the table will take up
	 * totalmem * sizeof(void*) / 8K (1MB per GB with 8-byte pointers).
	 */
	while (hsize * zfs_arc_average_blocksize < arc_all_memory())
		hsize <<= 1;
retry:
	buf_hash_table.ht_mask = hsize - 1;
#if defined(_KERNEL)
	/*
	 * Large allocations which do not require contiguous pages
	 * should be using vmem_alloc() in the linux kernel
	 */
	buf_hash_table.ht_table =
	    vmem_zalloc(hsize * sizeof (void*), KM_SLEEP);
#else
	buf_hash_table.ht_table =
	    kmem_zalloc(hsize * sizeof (void*), KM_NOSLEEP);
#endif
	if (buf_hash_table.ht_table == NULL) {
		ASSERT(hsize > (1ULL << 8));
		hsize >>= 1;
		goto retry;
	}

	hdr_full_cache = kmem_cache_create("arc_buf_hdr_t_full", HDR_FULL_SIZE,
	    0, hdr_full_cons, hdr_full_dest, NULL, NULL, NULL, 0);
	hdr_l2only_cache = kmem_cache_create("arc_buf_hdr_t_l2only",
	    HDR_L2ONLY_SIZE, 0, hdr_l2only_cons, hdr_l2only_dest, NULL,
	    NULL, NULL, 0);
	buf_cache = kmem_cache_create("arc_buf_t", sizeof (arc_buf_t),
	    0, buf_cons, buf_dest, NULL, NULL, NULL, 0);

	for (i = 0; i < 256; i++)
		for (ct = zfs_crc64_table + i, *ct = i, j = 8; j > 0; j--)
			*ct = (*ct >> 1) ^ (-(*ct & 1) & ZFS_CRC64_POLY);

	for (i = 0; i < BUF_LOCKS; i++)
		mutex_init(BUF_HASH_LOCK(i), NULL, MUTEX_DEFAULT, NULL);
}

#define	ARC_MINTIME	(hz>>4) /* 62 ms */

/*
 * This is the size that the buf occupies in memory. If the buf is compressed,
 * it will correspond to the compressed size. You should use this method of
 * getting the buf size unless you explicitly need the logical size.
 */
uint64_t
arc_buf_size(arc_buf_t *buf)
{
	return (ARC_BUF_COMPRESSED(buf) ?
	    HDR_GET_PSIZE(buf->b_hdr) : HDR_GET_LSIZE(buf->b_hdr));
}

uint64_t
arc_buf_lsize(arc_buf_t *buf)
{
	return (HDR_GET_LSIZE(buf->b_hdr));
}

/*
 * This function will return B_TRUE if the buffer is encrypted in memory.
 * This buffer can be decrypted by calling arc_untransform().
 */
boolean_t
arc_is_encrypted(arc_buf_t *buf)
{
	return (ARC_BUF_ENCRYPTED(buf) != 0);
}

/*
 * Returns B_TRUE if the buffer represents data that has not had its MAC
 * verified yet.
 */
boolean_t
arc_is_unauthenticated(arc_buf_t *buf)
{
	return (HDR_NOAUTH(buf->b_hdr) != 0);
}

void
arc_get_raw_params(arc_buf_t *buf, boolean_t *byteorder, uint8_t *salt,
    uint8_t *iv, uint8_t *mac)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;

	ASSERT(HDR_PROTECTED(hdr));

	memcpy(salt, hdr->b_crypt_hdr.b_salt, ZIO_DATA_SALT_LEN);
	memcpy(iv, hdr->b_crypt_hdr.b_iv, ZIO_DATA_IV_LEN);
	memcpy(mac, hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN);
	*byteorder = (hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS) ?
	    ZFS_HOST_BYTEORDER : !ZFS_HOST_BYTEORDER;
}

/*
 * Indicates how this buffer is compressed in memory. If it is not compressed
 * the value will be ZIO_COMPRESS_OFF. It can be made normally readable with
 * arc_untransform() as long as it is also unencrypted.
 */
enum zio_compress
arc_get_compression(arc_buf_t *buf)
{
	return (ARC_BUF_COMPRESSED(buf) ?
	    HDR_GET_COMPRESS(buf->b_hdr) : ZIO_COMPRESS_OFF);
}

/*
 * Return the compression algorithm used to store this data in the ARC. If ARC
 * compression is enabled or this is an encrypted block, this will be the same
 * as what's used to store it on-disk. Otherwise, this will be ZIO_COMPRESS_OFF.
 */
static inline enum zio_compress
arc_hdr_get_compress(arc_buf_hdr_t *hdr)
{
	return (HDR_COMPRESSION_ENABLED(hdr) ?
	    HDR_GET_COMPRESS(hdr) : ZIO_COMPRESS_OFF);
}

uint8_t
arc_get_complevel(arc_buf_t *buf)
{
	return (buf->b_hdr->b_complevel);
}

static inline boolean_t
arc_buf_is_shared(arc_buf_t *buf)
{
	boolean_t shared = (buf->b_data != NULL &&
	    buf->b_hdr->b_l1hdr.b_pabd != NULL &&
	    abd_is_linear(buf->b_hdr->b_l1hdr.b_pabd) &&
	    buf->b_data == abd_to_buf(buf->b_hdr->b_l1hdr.b_pabd));
	IMPLY(shared, HDR_SHARED_DATA(buf->b_hdr));
	EQUIV(shared, ARC_BUF_SHARED(buf));
	IMPLY(shared, ARC_BUF_COMPRESSED(buf) || ARC_BUF_LAST(buf));

	/*
	 * It would be nice to assert arc_can_share() too, but the "hdr isn't
	 * already being shared" requirement prevents us from doing that.
	 */

	return (shared);
}

/*
 * Free the checksum associated with this header. If there is no checksum, this
 * is a no-op.
 */
static inline void
arc_cksum_free(arc_buf_hdr_t *hdr)
{
#ifdef ZFS_DEBUG
	ASSERT(HDR_HAS_L1HDR(hdr));

	mutex_enter(&hdr->b_l1hdr.b_freeze_lock);
	if (hdr->b_l1hdr.b_freeze_cksum != NULL) {
		kmem_free(hdr->b_l1hdr.b_freeze_cksum, sizeof (zio_cksum_t));
		hdr->b_l1hdr.b_freeze_cksum = NULL;
	}
	mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
#endif
}

/*
 * Return true iff at least one of the bufs on hdr is not compressed.
 * Encrypted buffers count as compressed.
 */
static boolean_t
arc_hdr_has_uncompressed_buf(arc_buf_hdr_t *hdr)
{
	ASSERT(hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY_OR_LOCKED(hdr));

	for (arc_buf_t *b = hdr->b_l1hdr.b_buf; b != NULL; b = b->b_next) {
		if (!ARC_BUF_COMPRESSED(b)) {
			return (B_TRUE);
		}
	}
	return (B_FALSE);
}


/*
 * If we've turned on the ZFS_DEBUG_MODIFY flag, verify that the buf's data
 * matches the checksum that is stored in the hdr. If there is no checksum,
 * or if the buf is compressed, this is a no-op.
 */
static void
arc_cksum_verify(arc_buf_t *buf)
{
#ifdef ZFS_DEBUG
	arc_buf_hdr_t *hdr = buf->b_hdr;
	zio_cksum_t zc;

	if (!(zfs_flags & ZFS_DEBUG_MODIFY))
		return;

	if (ARC_BUF_COMPRESSED(buf))
		return;

	ASSERT(HDR_HAS_L1HDR(hdr));

	mutex_enter(&hdr->b_l1hdr.b_freeze_lock);

	if (hdr->b_l1hdr.b_freeze_cksum == NULL || HDR_IO_ERROR(hdr)) {
		mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
		return;
	}

	fletcher_2_native(buf->b_data, arc_buf_size(buf), NULL, &zc);
	if (!ZIO_CHECKSUM_EQUAL(*hdr->b_l1hdr.b_freeze_cksum, zc))
		panic("buffer modified while frozen!");
	mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
#endif
}

/*
 * This function makes the assumption that data stored in the L2ARC
 * will be transformed exactly as it is in the main pool. Because of
 * this we can verify the checksum against the reading process's bp.
 */
static boolean_t
arc_cksum_is_equal(arc_buf_hdr_t *hdr, zio_t *zio)
{
	ASSERT(!BP_IS_EMBEDDED(zio->io_bp));
	VERIFY3U(BP_GET_PSIZE(zio->io_bp), ==, HDR_GET_PSIZE(hdr));

	/*
	 * Block pointers always store the checksum for the logical data.
	 * If the block pointer has the gang bit set, then the checksum
	 * it represents is for the reconstituted data and not for an
	 * individual gang member. The zio pipeline, however, must be able to
	 * determine the checksum of each of the gang constituents so it
	 * treats the checksum comparison differently than what we need
	 * for l2arc blocks. This prevents us from using the
	 * zio_checksum_error() interface directly. Instead we must call the
	 * zio_checksum_error_impl() so that we can ensure the checksum is
	 * generated using the correct checksum algorithm and accounts for the
	 * logical I/O size and not just a gang fragment.
	 */
	return (zio_checksum_error_impl(zio->io_spa, zio->io_bp,
	    BP_GET_CHECKSUM(zio->io_bp), zio->io_abd, zio->io_size,
	    zio->io_offset, NULL) == 0);
}

/*
 * Given a buf full of data, if ZFS_DEBUG_MODIFY is enabled this computes a
 * checksum and attaches it to the buf's hdr so that we can ensure that the buf
 * isn't modified later on. If buf is compressed or there is already a checksum
 * on the hdr, this is a no-op (we only checksum uncompressed bufs).
1485 */ 1486 static void 1487 arc_cksum_compute(arc_buf_t *buf) 1488 { 1489 if (!(zfs_flags & ZFS_DEBUG_MODIFY)) 1490 return; 1491 1492 #ifdef ZFS_DEBUG 1493 arc_buf_hdr_t *hdr = buf->b_hdr; 1494 ASSERT(HDR_HAS_L1HDR(hdr)); 1495 mutex_enter(&hdr->b_l1hdr.b_freeze_lock); 1496 if (hdr->b_l1hdr.b_freeze_cksum != NULL || ARC_BUF_COMPRESSED(buf)) { 1497 mutex_exit(&hdr->b_l1hdr.b_freeze_lock); 1498 return; 1499 } 1500 1501 ASSERT(!ARC_BUF_ENCRYPTED(buf)); 1502 ASSERT(!ARC_BUF_COMPRESSED(buf)); 1503 hdr->b_l1hdr.b_freeze_cksum = kmem_alloc(sizeof (zio_cksum_t), 1504 KM_SLEEP); 1505 fletcher_2_native(buf->b_data, arc_buf_size(buf), NULL, 1506 hdr->b_l1hdr.b_freeze_cksum); 1507 mutex_exit(&hdr->b_l1hdr.b_freeze_lock); 1508 #endif 1509 arc_buf_watch(buf); 1510 } 1511 1512 #ifndef _KERNEL 1513 void 1514 arc_buf_sigsegv(int sig, siginfo_t *si, void *unused) 1515 { 1516 (void) sig, (void) unused; 1517 panic("Got SIGSEGV at address: 0x%lx\n", (long)si->si_addr); 1518 } 1519 #endif 1520 1521 static void 1522 arc_buf_unwatch(arc_buf_t *buf) 1523 { 1524 #ifndef _KERNEL 1525 if (arc_watch) { 1526 ASSERT0(mprotect(buf->b_data, arc_buf_size(buf), 1527 PROT_READ | PROT_WRITE)); 1528 } 1529 #else 1530 (void) buf; 1531 #endif 1532 } 1533 1534 static void 1535 arc_buf_watch(arc_buf_t *buf) 1536 { 1537 #ifndef _KERNEL 1538 if (arc_watch) 1539 ASSERT0(mprotect(buf->b_data, arc_buf_size(buf), 1540 PROT_READ)); 1541 #else 1542 (void) buf; 1543 #endif 1544 } 1545 1546 static arc_buf_contents_t 1547 arc_buf_type(arc_buf_hdr_t *hdr) 1548 { 1549 arc_buf_contents_t type; 1550 if (HDR_ISTYPE_METADATA(hdr)) { 1551 type = ARC_BUFC_METADATA; 1552 } else { 1553 type = ARC_BUFC_DATA; 1554 } 1555 VERIFY3U(hdr->b_type, ==, type); 1556 return (type); 1557 } 1558 1559 boolean_t 1560 arc_is_metadata(arc_buf_t *buf) 1561 { 1562 return (HDR_ISTYPE_METADATA(buf->b_hdr) != 0); 1563 } 1564 1565 static uint32_t 1566 arc_bufc_to_flags(arc_buf_contents_t type) 1567 { 1568 switch (type) { 1569 case ARC_BUFC_DATA: 1570 /* metadata field is 0 if buffer contains normal data */ 1571 return (0); 1572 case ARC_BUFC_METADATA: 1573 return (ARC_FLAG_BUFC_METADATA); 1574 default: 1575 break; 1576 } 1577 panic("undefined ARC buffer type!"); 1578 return ((uint32_t)-1); 1579 } 1580 1581 void 1582 arc_buf_thaw(arc_buf_t *buf) 1583 { 1584 arc_buf_hdr_t *hdr = buf->b_hdr; 1585 1586 ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); 1587 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 1588 1589 arc_cksum_verify(buf); 1590 1591 /* 1592 * Compressed buffers do not manipulate the b_freeze_cksum. 1593 */ 1594 if (ARC_BUF_COMPRESSED(buf)) 1595 return; 1596 1597 ASSERT(HDR_HAS_L1HDR(hdr)); 1598 arc_cksum_free(hdr); 1599 arc_buf_unwatch(buf); 1600 } 1601 1602 void 1603 arc_buf_freeze(arc_buf_t *buf) 1604 { 1605 if (!(zfs_flags & ZFS_DEBUG_MODIFY)) 1606 return; 1607 1608 if (ARC_BUF_COMPRESSED(buf)) 1609 return; 1610 1611 ASSERT(HDR_HAS_L1HDR(buf->b_hdr)); 1612 arc_cksum_compute(buf); 1613 } 1614 1615 /* 1616 * The arc_buf_hdr_t's b_flags should never be modified directly. Instead, 1617 * the following functions should be used to ensure that the flags are 1618 * updated in a thread-safe way. When manipulating the flags either 1619 * the hash_lock must be held or the hdr must be undiscoverable. This 1620 * ensures that we're not racing with any other threads when updating 1621 * the flags. 
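 *
 * A minimal illustrative sketch of the expected pattern (not a new API,
 * just the existing calls used elsewhere in this file):
 *
 *	kmutex_t *hash_lock = HDR_LOCK(hdr);
 *	mutex_enter(hash_lock);
 *	arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR);
 *	mutex_exit(hash_lock);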
1622 */ 1623 static inline void 1624 arc_hdr_set_flags(arc_buf_hdr_t *hdr, arc_flags_t flags) 1625 { 1626 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1627 hdr->b_flags |= flags; 1628 } 1629 1630 static inline void 1631 arc_hdr_clear_flags(arc_buf_hdr_t *hdr, arc_flags_t flags) 1632 { 1633 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1634 hdr->b_flags &= ~flags; 1635 } 1636 1637 /* 1638 * Setting the compression bits in the arc_buf_hdr_t's b_flags is 1639 * done in a special way since we have to clear and set bits 1640 * at the same time. Consumers that wish to set the compression bits 1641 * must use this function to ensure that the flags are updated in a 1642 * thread-safe manner. 1643 */ 1644 static void 1645 arc_hdr_set_compress(arc_buf_hdr_t *hdr, enum zio_compress cmp) 1646 { 1647 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1648 1649 /* 1650 * Holes and embedded blocks will always have a psize = 0, so 1651 * we ignore the compression of the blkptr and simply mark 1652 * them as uncompressed. 1653 */ 1654 if (!zfs_compressed_arc_enabled || HDR_GET_PSIZE(hdr) == 0) { 1655 arc_hdr_clear_flags(hdr, ARC_FLAG_COMPRESSED_ARC); 1656 ASSERT(!HDR_COMPRESSION_ENABLED(hdr)); 1657 } else { 1658 arc_hdr_set_flags(hdr, ARC_FLAG_COMPRESSED_ARC); 1659 ASSERT(HDR_COMPRESSION_ENABLED(hdr)); 1660 } 1661 1662 HDR_SET_COMPRESS(hdr, cmp); 1663 ASSERT3U(HDR_GET_COMPRESS(hdr), ==, cmp); 1664 } 1665 1666 /* 1667 * Looks for another buf on the same hdr which has the data decompressed, copies 1668 * from it, and returns true. If no such buf exists, returns false. 1669 */ 1670 static boolean_t 1671 arc_buf_try_copy_decompressed_data(arc_buf_t *buf) 1672 { 1673 arc_buf_hdr_t *hdr = buf->b_hdr; 1674 boolean_t copied = B_FALSE; 1675 1676 ASSERT(HDR_HAS_L1HDR(hdr)); 1677 ASSERT3P(buf->b_data, !=, NULL); 1678 ASSERT(!ARC_BUF_COMPRESSED(buf)); 1679 1680 for (arc_buf_t *from = hdr->b_l1hdr.b_buf; from != NULL; 1681 from = from->b_next) { 1682 /* can't use our own data buffer */ 1683 if (from == buf) { 1684 continue; 1685 } 1686 1687 if (!ARC_BUF_COMPRESSED(from)) { 1688 memcpy(buf->b_data, from->b_data, arc_buf_size(buf)); 1689 copied = B_TRUE; 1690 break; 1691 } 1692 } 1693 1694 #ifdef ZFS_DEBUG 1695 /* 1696 * There were no decompressed bufs, so there should not be a 1697 * checksum on the hdr either. 1698 */ 1699 if (zfs_flags & ZFS_DEBUG_MODIFY) 1700 EQUIV(!copied, hdr->b_l1hdr.b_freeze_cksum == NULL); 1701 #endif 1702 1703 return (copied); 1704 } 1705 1706 /* 1707 * Allocates an ARC buf header that's in an evicted & L2-cached state. 1708 * This is used during l2arc reconstruction to make empty ARC buffers 1709 * which circumvent the regular disk->arc->l2arc path and instead come 1710 * into being in the reverse order, i.e. l2arc->arc.
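 * The resulting header has no L1 portion; it only records enough state
 * (dva, birth, sizes, compression and the device address in b_l2hdr) to
 * locate the block on the L2ARC device later, and b_arcs_state preserves
 * the ARC state recorded for the block (presumably taken from the
 * persistent L2ARC log entry being rebuilt).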
1711 */ 1712 static arc_buf_hdr_t * 1713 arc_buf_alloc_l2only(size_t size, arc_buf_contents_t type, l2arc_dev_t *dev, 1714 dva_t dva, uint64_t daddr, int32_t psize, uint64_t birth, 1715 enum zio_compress compress, uint8_t complevel, boolean_t protected, 1716 boolean_t prefetch, arc_state_type_t arcs_state) 1717 { 1718 arc_buf_hdr_t *hdr; 1719 1720 ASSERT(size != 0); 1721 hdr = kmem_cache_alloc(hdr_l2only_cache, KM_SLEEP); 1722 hdr->b_birth = birth; 1723 hdr->b_type = type; 1724 hdr->b_flags = 0; 1725 arc_hdr_set_flags(hdr, arc_bufc_to_flags(type) | ARC_FLAG_HAS_L2HDR); 1726 HDR_SET_LSIZE(hdr, size); 1727 HDR_SET_PSIZE(hdr, psize); 1728 arc_hdr_set_compress(hdr, compress); 1729 hdr->b_complevel = complevel; 1730 if (protected) 1731 arc_hdr_set_flags(hdr, ARC_FLAG_PROTECTED); 1732 if (prefetch) 1733 arc_hdr_set_flags(hdr, ARC_FLAG_PREFETCH); 1734 hdr->b_spa = spa_load_guid(dev->l2ad_vdev->vdev_spa); 1735 1736 hdr->b_dva = dva; 1737 1738 hdr->b_l2hdr.b_dev = dev; 1739 hdr->b_l2hdr.b_daddr = daddr; 1740 hdr->b_l2hdr.b_arcs_state = arcs_state; 1741 1742 return (hdr); 1743 } 1744 1745 /* 1746 * Return the size of the block, b_pabd, that is stored in the arc_buf_hdr_t. 1747 */ 1748 static uint64_t 1749 arc_hdr_size(arc_buf_hdr_t *hdr) 1750 { 1751 uint64_t size; 1752 1753 if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF && 1754 HDR_GET_PSIZE(hdr) > 0) { 1755 size = HDR_GET_PSIZE(hdr); 1756 } else { 1757 ASSERT3U(HDR_GET_LSIZE(hdr), !=, 0); 1758 size = HDR_GET_LSIZE(hdr); 1759 } 1760 return (size); 1761 } 1762 1763 static int 1764 arc_hdr_authenticate(arc_buf_hdr_t *hdr, spa_t *spa, uint64_t dsobj) 1765 { 1766 int ret; 1767 uint64_t csize; 1768 uint64_t lsize = HDR_GET_LSIZE(hdr); 1769 uint64_t psize = HDR_GET_PSIZE(hdr); 1770 void *tmpbuf = NULL; 1771 abd_t *abd = hdr->b_l1hdr.b_pabd; 1772 1773 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1774 ASSERT(HDR_AUTHENTICATED(hdr)); 1775 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 1776 1777 /* 1778 * The MAC is calculated on the compressed data that is stored on disk. 1779 * However, if compressed arc is disabled we will only have the 1780 * decompressed data available to us now. Compress it into a temporary 1781 * abd so we can verify the MAC. The performance overhead of this will 1782 * be relatively low, since most objects in an encrypted objset will 1783 * be encrypted (instead of authenticated) anyway. 1784 */ 1785 if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && 1786 !HDR_COMPRESSION_ENABLED(hdr)) { 1787 1788 csize = zio_compress_data(HDR_GET_COMPRESS(hdr), 1789 hdr->b_l1hdr.b_pabd, &tmpbuf, lsize, hdr->b_complevel); 1790 ASSERT3P(tmpbuf, !=, NULL); 1791 ASSERT3U(csize, <=, psize); 1792 abd = abd_get_from_buf(tmpbuf, lsize); 1793 abd_take_ownership_of_buf(abd, B_TRUE); 1794 abd_zero_off(abd, csize, psize - csize); 1795 } 1796 1797 /* 1798 * Authentication is best effort. We authenticate whenever the key is 1799 * available. If we succeed we clear ARC_FLAG_NOAUTH. 
1800 */ 1801 if (hdr->b_crypt_hdr.b_ot == DMU_OT_OBJSET) { 1802 ASSERT3U(HDR_GET_COMPRESS(hdr), ==, ZIO_COMPRESS_OFF); 1803 ASSERT3U(lsize, ==, psize); 1804 ret = spa_do_crypt_objset_mac_abd(B_FALSE, spa, dsobj, abd, 1805 psize, hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS); 1806 } else { 1807 ret = spa_do_crypt_mac_abd(B_FALSE, spa, dsobj, abd, psize, 1808 hdr->b_crypt_hdr.b_mac); 1809 } 1810 1811 if (ret == 0) 1812 arc_hdr_clear_flags(hdr, ARC_FLAG_NOAUTH); 1813 else if (ret != ENOENT) 1814 goto error; 1815 1816 if (tmpbuf != NULL) 1817 abd_free(abd); 1818 1819 return (0); 1820 1821 error: 1822 if (tmpbuf != NULL) 1823 abd_free(abd); 1824 1825 return (ret); 1826 } 1827 1828 /* 1829 * This function will take a header that only has raw encrypted data in 1830 * b_crypt_hdr.b_rabd and decrypt it into a new buffer which is stored in 1831 * b_l1hdr.b_pabd. If designated in the header flags, this function will 1832 * also decompress the data. 1833 */ 1834 static int 1835 arc_hdr_decrypt(arc_buf_hdr_t *hdr, spa_t *spa, const zbookmark_phys_t *zb) 1836 { 1837 int ret; 1838 abd_t *cabd = NULL; 1839 void *tmp = NULL; 1840 boolean_t no_crypt = B_FALSE; 1841 boolean_t bswap = (hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS); 1842 1843 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1844 ASSERT(HDR_ENCRYPTED(hdr)); 1845 1846 arc_hdr_alloc_abd(hdr, 0); 1847 1848 ret = spa_do_crypt_abd(B_FALSE, spa, zb, hdr->b_crypt_hdr.b_ot, 1849 B_FALSE, bswap, hdr->b_crypt_hdr.b_salt, hdr->b_crypt_hdr.b_iv, 1850 hdr->b_crypt_hdr.b_mac, HDR_GET_PSIZE(hdr), hdr->b_l1hdr.b_pabd, 1851 hdr->b_crypt_hdr.b_rabd, &no_crypt); 1852 if (ret != 0) 1853 goto error; 1854 1855 if (no_crypt) { 1856 abd_copy(hdr->b_l1hdr.b_pabd, hdr->b_crypt_hdr.b_rabd, 1857 HDR_GET_PSIZE(hdr)); 1858 } 1859 1860 /* 1861 * If this header has disabled arc compression but the b_pabd is 1862 * compressed after decrypting it, we need to decompress the newly 1863 * decrypted data. 1864 */ 1865 if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && 1866 !HDR_COMPRESSION_ENABLED(hdr)) { 1867 /* 1868 * We want to make sure that we are correctly honoring the 1869 * zfs_abd_scatter_enabled setting, so we allocate an abd here 1870 * and then loan a buffer from it, rather than allocating a 1871 * linear buffer and wrapping it in an abd later. 1872 */ 1873 cabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr, 0); 1874 tmp = abd_borrow_buf(cabd, arc_hdr_size(hdr)); 1875 1876 ret = zio_decompress_data(HDR_GET_COMPRESS(hdr), 1877 hdr->b_l1hdr.b_pabd, tmp, HDR_GET_PSIZE(hdr), 1878 HDR_GET_LSIZE(hdr), &hdr->b_complevel); 1879 if (ret != 0) { 1880 abd_return_buf(cabd, tmp, arc_hdr_size(hdr)); 1881 goto error; 1882 } 1883 1884 abd_return_buf_copy(cabd, tmp, arc_hdr_size(hdr)); 1885 arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, 1886 arc_hdr_size(hdr), hdr); 1887 hdr->b_l1hdr.b_pabd = cabd; 1888 } 1889 1890 return (0); 1891 1892 error: 1893 arc_hdr_free_abd(hdr, B_FALSE); 1894 if (cabd != NULL) 1895 arc_free_data_buf(hdr, cabd, arc_hdr_size(hdr), hdr); 1896 1897 return (ret); 1898 } 1899 1900 /* 1901 * This function is called during arc_buf_fill() to prepare the header's 1902 * abd plaintext pointer for use. This involves authenticated protected 1903 * data and decrypting encrypted data into the plaintext abd. 
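 * Two cases are handled below: if the data has not been authenticated yet
 * and the caller wants authenticated data, the MAC is verified via
 * arc_hdr_authenticate(); if only the raw encrypted copy (b_rabd) exists,
 * it is decrypted into b_pabd via arc_hdr_decrypt(). Both are done under
 * the hash lock when one is supplied.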
1904 */ 1905 static int 1906 arc_fill_hdr_crypt(arc_buf_hdr_t *hdr, kmutex_t *hash_lock, spa_t *spa, 1907 const zbookmark_phys_t *zb, boolean_t noauth) 1908 { 1909 int ret; 1910 1911 ASSERT(HDR_PROTECTED(hdr)); 1912 1913 if (hash_lock != NULL) 1914 mutex_enter(hash_lock); 1915 1916 if (HDR_NOAUTH(hdr) && !noauth) { 1917 /* 1918 * The caller requested authenticated data but our data has 1919 * not been authenticated yet. Verify the MAC now if we can. 1920 */ 1921 ret = arc_hdr_authenticate(hdr, spa, zb->zb_objset); 1922 if (ret != 0) 1923 goto error; 1924 } else if (HDR_HAS_RABD(hdr) && hdr->b_l1hdr.b_pabd == NULL) { 1925 /* 1926 * If we only have the encrypted version of the data, but the 1927 * unencrypted version was requested we take this opportunity 1928 * to store the decrypted version in the header for future use. 1929 */ 1930 ret = arc_hdr_decrypt(hdr, spa, zb); 1931 if (ret != 0) 1932 goto error; 1933 } 1934 1935 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 1936 1937 if (hash_lock != NULL) 1938 mutex_exit(hash_lock); 1939 1940 return (0); 1941 1942 error: 1943 if (hash_lock != NULL) 1944 mutex_exit(hash_lock); 1945 1946 return (ret); 1947 } 1948 1949 /* 1950 * This function is used by the dbuf code to decrypt bonus buffers in place. 1951 * The dbuf code itself doesn't have any locking for decrypting a shared dnode 1952 * block, so we use the hash lock here to protect against concurrent calls to 1953 * arc_buf_fill(). 1954 */ 1955 static void 1956 arc_buf_untransform_in_place(arc_buf_t *buf) 1957 { 1958 arc_buf_hdr_t *hdr = buf->b_hdr; 1959 1960 ASSERT(HDR_ENCRYPTED(hdr)); 1961 ASSERT3U(hdr->b_crypt_hdr.b_ot, ==, DMU_OT_DNODE); 1962 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1963 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 1964 1965 zio_crypt_copy_dnode_bonus(hdr->b_l1hdr.b_pabd, buf->b_data, 1966 arc_buf_size(buf)); 1967 buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED; 1968 buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; 1969 } 1970 1971 /* 1972 * Given a buf that has a data buffer attached to it, this function will 1973 * efficiently fill the buf with data of the specified compression setting from 1974 * the hdr and update the hdr's b_freeze_cksum if necessary. If the buf and hdr 1975 * are already sharing a data buf, no copy is performed. 1976 * 1977 * If the buf is marked as compressed but uncompressed data was requested, this 1978 * will allocate a new data buffer for the buf, remove that flag, and fill the 1979 * buf with uncompressed data. You can't request a compressed buf on a hdr with 1980 * uncompressed data, and (since we haven't added support for it yet) if you 1981 * want compressed data your buf must already be marked as compressed and have 1982 * the correct-sized data buffer. 1983 */ 1984 static int 1985 arc_buf_fill(arc_buf_t *buf, spa_t *spa, const zbookmark_phys_t *zb, 1986 arc_fill_flags_t flags) 1987 { 1988 int error = 0; 1989 arc_buf_hdr_t *hdr = buf->b_hdr; 1990 boolean_t hdr_compressed = 1991 (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF); 1992 boolean_t compressed = (flags & ARC_FILL_COMPRESSED) != 0; 1993 boolean_t encrypted = (flags & ARC_FILL_ENCRYPTED) != 0; 1994 dmu_object_byteswap_t bswap = hdr->b_l1hdr.b_byteswap; 1995 kmutex_t *hash_lock = (flags & ARC_FILL_LOCKED) ? 
NULL : HDR_LOCK(hdr); 1996 1997 ASSERT3P(buf->b_data, !=, NULL); 1998 IMPLY(compressed, hdr_compressed || ARC_BUF_ENCRYPTED(buf)); 1999 IMPLY(compressed, ARC_BUF_COMPRESSED(buf)); 2000 IMPLY(encrypted, HDR_ENCRYPTED(hdr)); 2001 IMPLY(encrypted, ARC_BUF_ENCRYPTED(buf)); 2002 IMPLY(encrypted, ARC_BUF_COMPRESSED(buf)); 2003 IMPLY(encrypted, !arc_buf_is_shared(buf)); 2004 2005 /* 2006 * If the caller wanted encrypted data we just need to copy it from 2007 * b_rabd and potentially byteswap it. We won't be able to do any 2008 * further transforms on it. 2009 */ 2010 if (encrypted) { 2011 ASSERT(HDR_HAS_RABD(hdr)); 2012 abd_copy_to_buf(buf->b_data, hdr->b_crypt_hdr.b_rabd, 2013 HDR_GET_PSIZE(hdr)); 2014 goto byteswap; 2015 } 2016 2017 /* 2018 * Adjust encrypted and authenticated headers to accommodate 2019 * the request if needed. Dnode blocks (ARC_FILL_IN_PLACE) are 2020 * allowed to fail decryption due to keys not being loaded 2021 * without being marked as an IO error. 2022 */ 2023 if (HDR_PROTECTED(hdr)) { 2024 error = arc_fill_hdr_crypt(hdr, hash_lock, spa, 2025 zb, !!(flags & ARC_FILL_NOAUTH)); 2026 if (error == EACCES && (flags & ARC_FILL_IN_PLACE) != 0) { 2027 return (error); 2028 } else if (error != 0) { 2029 if (hash_lock != NULL) 2030 mutex_enter(hash_lock); 2031 arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR); 2032 if (hash_lock != NULL) 2033 mutex_exit(hash_lock); 2034 return (error); 2035 } 2036 } 2037 2038 /* 2039 * There is a special case here for dnode blocks which are 2040 * decrypting their bonus buffers. These blocks may request to 2041 * be decrypted in-place. This is necessary because there may 2042 * be many dnodes pointing into this buffer and there is 2043 * currently no method to synchronize replacing the backing 2044 * b_data buffer and updating all of the pointers. Here we use 2045 * the hash lock to ensure there are no races. If the need 2046 * arises for other types to be decrypted in-place, they must 2047 * add handling here as well. 2048 */ 2049 if ((flags & ARC_FILL_IN_PLACE) != 0) { 2050 ASSERT(!hdr_compressed); 2051 ASSERT(!compressed); 2052 ASSERT(!encrypted); 2053 2054 if (HDR_ENCRYPTED(hdr) && ARC_BUF_ENCRYPTED(buf)) { 2055 ASSERT3U(hdr->b_crypt_hdr.b_ot, ==, DMU_OT_DNODE); 2056 2057 if (hash_lock != NULL) 2058 mutex_enter(hash_lock); 2059 arc_buf_untransform_in_place(buf); 2060 if (hash_lock != NULL) 2061 mutex_exit(hash_lock); 2062 2063 /* Compute the hdr's checksum if necessary */ 2064 arc_cksum_compute(buf); 2065 } 2066 2067 return (0); 2068 } 2069 2070 if (hdr_compressed == compressed) { 2071 if (ARC_BUF_SHARED(buf)) { 2072 ASSERT(arc_buf_is_shared(buf)); 2073 } else { 2074 abd_copy_to_buf(buf->b_data, hdr->b_l1hdr.b_pabd, 2075 arc_buf_size(buf)); 2076 } 2077 } else { 2078 ASSERT(hdr_compressed); 2079 ASSERT(!compressed); 2080 2081 /* 2082 * If the buf is sharing its data with the hdr, unlink it and 2083 * allocate a new data buffer for the buf. 
2084 */ 2085 if (ARC_BUF_SHARED(buf)) { 2086 ASSERT(ARC_BUF_COMPRESSED(buf)); 2087 2088 /* We need to give the buf its own b_data */ 2089 buf->b_flags &= ~ARC_BUF_FLAG_SHARED; 2090 buf->b_data = 2091 arc_get_data_buf(hdr, HDR_GET_LSIZE(hdr), buf); 2092 arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA); 2093 2094 /* Previously overhead was 0; just add new overhead */ 2095 ARCSTAT_INCR(arcstat_overhead_size, HDR_GET_LSIZE(hdr)); 2096 } else if (ARC_BUF_COMPRESSED(buf)) { 2097 ASSERT(!arc_buf_is_shared(buf)); 2098 2099 /* We need to reallocate the buf's b_data */ 2100 arc_free_data_buf(hdr, buf->b_data, HDR_GET_PSIZE(hdr), 2101 buf); 2102 buf->b_data = 2103 arc_get_data_buf(hdr, HDR_GET_LSIZE(hdr), buf); 2104 2105 /* We increased the size of b_data; update overhead */ 2106 ARCSTAT_INCR(arcstat_overhead_size, 2107 HDR_GET_LSIZE(hdr) - HDR_GET_PSIZE(hdr)); 2108 } 2109 2110 /* 2111 * Regardless of the buf's previous compression settings, it 2112 * should not be compressed at the end of this function. 2113 */ 2114 buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; 2115 2116 /* 2117 * Try copying the data from another buf which already has a 2118 * decompressed version. If that's not possible, it's time to 2119 * bite the bullet and decompress the data from the hdr. 2120 */ 2121 if (arc_buf_try_copy_decompressed_data(buf)) { 2122 /* Skip byteswapping and checksumming (already done) */ 2123 return (0); 2124 } else { 2125 error = zio_decompress_data(HDR_GET_COMPRESS(hdr), 2126 hdr->b_l1hdr.b_pabd, buf->b_data, 2127 HDR_GET_PSIZE(hdr), HDR_GET_LSIZE(hdr), 2128 &hdr->b_complevel); 2129 2130 /* 2131 * Absent hardware errors or software bugs, this should 2132 * be impossible, but log it anyway so we can debug it. 2133 */ 2134 if (error != 0) { 2135 zfs_dbgmsg( 2136 "hdr %px, compress %d, psize %d, lsize %d", 2137 hdr, arc_hdr_get_compress(hdr), 2138 HDR_GET_PSIZE(hdr), HDR_GET_LSIZE(hdr)); 2139 if (hash_lock != NULL) 2140 mutex_enter(hash_lock); 2141 arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR); 2142 if (hash_lock != NULL) 2143 mutex_exit(hash_lock); 2144 return (SET_ERROR(EIO)); 2145 } 2146 } 2147 } 2148 2149 byteswap: 2150 /* Byteswap the buf's data if necessary */ 2151 if (bswap != DMU_BSWAP_NUMFUNCS) { 2152 ASSERT(!HDR_SHARED_DATA(hdr)); 2153 ASSERT3U(bswap, <, DMU_BSWAP_NUMFUNCS); 2154 dmu_ot_byteswap[bswap].ob_func(buf->b_data, HDR_GET_LSIZE(hdr)); 2155 } 2156 2157 /* Compute the hdr's checksum if necessary */ 2158 arc_cksum_compute(buf); 2159 2160 return (0); 2161 } 2162 2163 /* 2164 * If this function is being called to decrypt an encrypted buffer or verify an 2165 * authenticated one, the key must be loaded and a mapping must be made 2166 * available in the keystore via spa_keystore_create_mapping() or one of its 2167 * callers. 2168 */ 2169 int 2170 arc_untransform(arc_buf_t *buf, spa_t *spa, const zbookmark_phys_t *zb, 2171 boolean_t in_place) 2172 { 2173 int ret; 2174 arc_fill_flags_t flags = 0; 2175 2176 if (in_place) 2177 flags |= ARC_FILL_IN_PLACE; 2178 2179 ret = arc_buf_fill(buf, spa, zb, flags); 2180 if (ret == ECKSUM) { 2181 /* 2182 * Convert authentication and decryption errors to EIO 2183 * (and generate an ereport) before leaving the ARC. 2184 */ 2185 ret = SET_ERROR(EIO); 2186 spa_log_error(spa, zb, &buf->b_hdr->b_birth); 2187 (void) zfs_ereport_post(FM_EREPORT_ZFS_AUTHENTICATION, 2188 spa, NULL, zb, NULL, 0); 2189 } 2190 2191 return (ret); 2192 } 2193 2194 /* 2195 * Increment the amount of evictable space in the arc_state_t's refcount. 
2196 * We account for the space used by the hdr and the arc buf individually 2197 * so that we can add and remove them from the refcount individually. 2198 */ 2199 static void 2200 arc_evictable_space_increment(arc_buf_hdr_t *hdr, arc_state_t *state) 2201 { 2202 arc_buf_contents_t type = arc_buf_type(hdr); 2203 2204 ASSERT(HDR_HAS_L1HDR(hdr)); 2205 2206 if (GHOST_STATE(state)) { 2207 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 2208 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2209 ASSERT(!HDR_HAS_RABD(hdr)); 2210 (void) zfs_refcount_add_many(&state->arcs_esize[type], 2211 HDR_GET_LSIZE(hdr), hdr); 2212 return; 2213 } 2214 2215 if (hdr->b_l1hdr.b_pabd != NULL) { 2216 (void) zfs_refcount_add_many(&state->arcs_esize[type], 2217 arc_hdr_size(hdr), hdr); 2218 } 2219 if (HDR_HAS_RABD(hdr)) { 2220 (void) zfs_refcount_add_many(&state->arcs_esize[type], 2221 HDR_GET_PSIZE(hdr), hdr); 2222 } 2223 2224 for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; 2225 buf = buf->b_next) { 2226 if (ARC_BUF_SHARED(buf)) 2227 continue; 2228 (void) zfs_refcount_add_many(&state->arcs_esize[type], 2229 arc_buf_size(buf), buf); 2230 } 2231 } 2232 2233 /* 2234 * Decrement the amount of evictable space in the arc_state_t's refcount. 2235 * We account for the space used by the hdr and the arc buf individually 2236 * so that we can add and remove them from the refcount individually. 2237 */ 2238 static void 2239 arc_evictable_space_decrement(arc_buf_hdr_t *hdr, arc_state_t *state) 2240 { 2241 arc_buf_contents_t type = arc_buf_type(hdr); 2242 2243 ASSERT(HDR_HAS_L1HDR(hdr)); 2244 2245 if (GHOST_STATE(state)) { 2246 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 2247 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2248 ASSERT(!HDR_HAS_RABD(hdr)); 2249 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2250 HDR_GET_LSIZE(hdr), hdr); 2251 return; 2252 } 2253 2254 if (hdr->b_l1hdr.b_pabd != NULL) { 2255 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2256 arc_hdr_size(hdr), hdr); 2257 } 2258 if (HDR_HAS_RABD(hdr)) { 2259 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2260 HDR_GET_PSIZE(hdr), hdr); 2261 } 2262 2263 for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; 2264 buf = buf->b_next) { 2265 if (ARC_BUF_SHARED(buf)) 2266 continue; 2267 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2268 arc_buf_size(buf), buf); 2269 } 2270 } 2271 2272 /* 2273 * Add a reference to this hdr indicating that someone is actively 2274 * referencing that memory. When the refcount transitions from 0 to 1, 2275 * we remove it from the respective arc_state_t list to indicate that 2276 * it is not evictable. 2277 */ 2278 static void 2279 add_reference(arc_buf_hdr_t *hdr, const void *tag) 2280 { 2281 arc_state_t *state = hdr->b_l1hdr.b_state; 2282 2283 ASSERT(HDR_HAS_L1HDR(hdr)); 2284 if (!HDR_EMPTY(hdr) && !MUTEX_HELD(HDR_LOCK(hdr))) { 2285 ASSERT(state == arc_anon); 2286 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 2287 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 2288 } 2289 2290 if ((zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, tag) == 1) && 2291 state != arc_anon && state != arc_l2c_only) { 2292 /* We don't use the L2-only state list. */ 2293 multilist_remove(&state->arcs_list[arc_buf_type(hdr)], hdr); 2294 arc_evictable_space_decrement(hdr, state); 2295 } 2296 } 2297 2298 /* 2299 * Remove a reference from this hdr. When the reference transitions from 2300 * 1 to 0 and we're not anonymous, then we add this hdr to the arc_state_t's 2301 * list making it eligible for eviction. 
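 * Anonymous headers, and arc_uncached headers that are not prefetches,
 * are destroyed outright when their last reference is dropped rather than
 * being placed on an eviction list.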
2302 */ 2303 static int 2304 remove_reference(arc_buf_hdr_t *hdr, const void *tag) 2305 { 2306 int cnt; 2307 arc_state_t *state = hdr->b_l1hdr.b_state; 2308 2309 ASSERT(HDR_HAS_L1HDR(hdr)); 2310 ASSERT(state == arc_anon || MUTEX_HELD(HDR_LOCK(hdr))); 2311 ASSERT(!GHOST_STATE(state)); /* arc_l2c_only counts as a ghost. */ 2312 2313 if ((cnt = zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, tag)) != 0) 2314 return (cnt); 2315 2316 if (state == arc_anon) { 2317 arc_hdr_destroy(hdr); 2318 return (0); 2319 } 2320 if (state == arc_uncached && !HDR_PREFETCH(hdr)) { 2321 arc_change_state(arc_anon, hdr); 2322 arc_hdr_destroy(hdr); 2323 return (0); 2324 } 2325 multilist_insert(&state->arcs_list[arc_buf_type(hdr)], hdr); 2326 arc_evictable_space_increment(hdr, state); 2327 return (0); 2328 } 2329 2330 /* 2331 * Returns detailed information about a specific arc buffer. When the 2332 * state_index argument is set the function will calculate the arc header 2333 * list position for its arc state. Since this requires a linear traversal 2334 * callers are strongly encourage not to do this. However, it can be helpful 2335 * for targeted analysis so the functionality is provided. 2336 */ 2337 void 2338 arc_buf_info(arc_buf_t *ab, arc_buf_info_t *abi, int state_index) 2339 { 2340 (void) state_index; 2341 arc_buf_hdr_t *hdr = ab->b_hdr; 2342 l1arc_buf_hdr_t *l1hdr = NULL; 2343 l2arc_buf_hdr_t *l2hdr = NULL; 2344 arc_state_t *state = NULL; 2345 2346 memset(abi, 0, sizeof (arc_buf_info_t)); 2347 2348 if (hdr == NULL) 2349 return; 2350 2351 abi->abi_flags = hdr->b_flags; 2352 2353 if (HDR_HAS_L1HDR(hdr)) { 2354 l1hdr = &hdr->b_l1hdr; 2355 state = l1hdr->b_state; 2356 } 2357 if (HDR_HAS_L2HDR(hdr)) 2358 l2hdr = &hdr->b_l2hdr; 2359 2360 if (l1hdr) { 2361 abi->abi_bufcnt = 0; 2362 for (arc_buf_t *buf = l1hdr->b_buf; buf; buf = buf->b_next) 2363 abi->abi_bufcnt++; 2364 abi->abi_access = l1hdr->b_arc_access; 2365 abi->abi_mru_hits = l1hdr->b_mru_hits; 2366 abi->abi_mru_ghost_hits = l1hdr->b_mru_ghost_hits; 2367 abi->abi_mfu_hits = l1hdr->b_mfu_hits; 2368 abi->abi_mfu_ghost_hits = l1hdr->b_mfu_ghost_hits; 2369 abi->abi_holds = zfs_refcount_count(&l1hdr->b_refcnt); 2370 } 2371 2372 if (l2hdr) { 2373 abi->abi_l2arc_dattr = l2hdr->b_daddr; 2374 abi->abi_l2arc_hits = l2hdr->b_hits; 2375 } 2376 2377 abi->abi_state_type = state ? state->arcs_state : ARC_STATE_ANON; 2378 abi->abi_state_contents = arc_buf_type(hdr); 2379 abi->abi_size = arc_hdr_size(hdr); 2380 } 2381 2382 /* 2383 * Move the supplied buffer to the indicated state. The hash lock 2384 * for the buffer must be held by the caller. 2385 */ 2386 static void 2387 arc_change_state(arc_state_t *new_state, arc_buf_hdr_t *hdr) 2388 { 2389 arc_state_t *old_state; 2390 int64_t refcnt; 2391 boolean_t update_old, update_new; 2392 arc_buf_contents_t type = arc_buf_type(hdr); 2393 2394 /* 2395 * We almost always have an L1 hdr here, since we call arc_hdr_realloc() 2396 * in arc_read() when bringing a buffer out of the L2ARC. However, the 2397 * L1 hdr doesn't always exist when we change state to arc_anon before 2398 * destroying a header, in which case reallocating to add the L1 hdr is 2399 * pointless. 
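 *
 * In outline: if the header holds no references it is moved between the
 * old and new states' multilists (with the evictable-space counters
 * updated), and the arcs_size refcounts are transferred: per buffer and
 * per b_pabd/b_rabd for normal states, or as a single lsize-sized
 * reference keyed on the hdr for ghost states.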
2400 */ 2401 if (HDR_HAS_L1HDR(hdr)) { 2402 old_state = hdr->b_l1hdr.b_state; 2403 refcnt = zfs_refcount_count(&hdr->b_l1hdr.b_refcnt); 2404 update_old = (hdr->b_l1hdr.b_buf != NULL || 2405 hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr)); 2406 2407 IMPLY(GHOST_STATE(old_state), hdr->b_l1hdr.b_buf == NULL); 2408 IMPLY(GHOST_STATE(new_state), hdr->b_l1hdr.b_buf == NULL); 2409 IMPLY(old_state == arc_anon, hdr->b_l1hdr.b_buf == NULL || 2410 ARC_BUF_LAST(hdr->b_l1hdr.b_buf)); 2411 } else { 2412 old_state = arc_l2c_only; 2413 refcnt = 0; 2414 update_old = B_FALSE; 2415 } 2416 update_new = update_old; 2417 if (GHOST_STATE(old_state)) 2418 update_old = B_TRUE; 2419 if (GHOST_STATE(new_state)) 2420 update_new = B_TRUE; 2421 2422 ASSERT(MUTEX_HELD(HDR_LOCK(hdr))); 2423 ASSERT3P(new_state, !=, old_state); 2424 2425 /* 2426 * If this buffer is evictable, transfer it from the 2427 * old state list to the new state list. 2428 */ 2429 if (refcnt == 0) { 2430 if (old_state != arc_anon && old_state != arc_l2c_only) { 2431 ASSERT(HDR_HAS_L1HDR(hdr)); 2432 /* remove_reference() saves on insert. */ 2433 if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) { 2434 multilist_remove(&old_state->arcs_list[type], 2435 hdr); 2436 arc_evictable_space_decrement(hdr, old_state); 2437 } 2438 } 2439 if (new_state != arc_anon && new_state != arc_l2c_only) { 2440 /* 2441 * An L1 header always exists here, since if we're 2442 * moving to some L1-cached state (i.e. not l2c_only or 2443 * anonymous), we realloc the header to add an L1hdr 2444 * beforehand. 2445 */ 2446 ASSERT(HDR_HAS_L1HDR(hdr)); 2447 multilist_insert(&new_state->arcs_list[type], hdr); 2448 arc_evictable_space_increment(hdr, new_state); 2449 } 2450 } 2451 2452 ASSERT(!HDR_EMPTY(hdr)); 2453 if (new_state == arc_anon && HDR_IN_HASH_TABLE(hdr)) 2454 buf_hash_remove(hdr); 2455 2456 /* adjust state sizes (ignore arc_l2c_only) */ 2457 2458 if (update_new && new_state != arc_l2c_only) { 2459 ASSERT(HDR_HAS_L1HDR(hdr)); 2460 if (GHOST_STATE(new_state)) { 2461 2462 /* 2463 * When moving a header to a ghost state, we first 2464 * remove all arc buffers. Thus, we'll have no arc 2465 * buffer to use for the reference. As a result, we 2466 * use the arc header pointer for the reference. 2467 */ 2468 (void) zfs_refcount_add_many( 2469 &new_state->arcs_size[type], 2470 HDR_GET_LSIZE(hdr), hdr); 2471 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2472 ASSERT(!HDR_HAS_RABD(hdr)); 2473 } else { 2474 2475 /* 2476 * Each individual buffer holds a unique reference, 2477 * thus we must remove each of these references one 2478 * at a time. 2479 */ 2480 for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; 2481 buf = buf->b_next) { 2482 2483 /* 2484 * When the arc_buf_t is sharing the data 2485 * block with the hdr, the owner of the 2486 * reference belongs to the hdr. Only 2487 * add to the refcount if the arc_buf_t is 2488 * not shared. 
2489 */ 2490 if (ARC_BUF_SHARED(buf)) 2491 continue; 2492 2493 (void) zfs_refcount_add_many( 2494 &new_state->arcs_size[type], 2495 arc_buf_size(buf), buf); 2496 } 2497 2498 if (hdr->b_l1hdr.b_pabd != NULL) { 2499 (void) zfs_refcount_add_many( 2500 &new_state->arcs_size[type], 2501 arc_hdr_size(hdr), hdr); 2502 } 2503 2504 if (HDR_HAS_RABD(hdr)) { 2505 (void) zfs_refcount_add_many( 2506 &new_state->arcs_size[type], 2507 HDR_GET_PSIZE(hdr), hdr); 2508 } 2509 } 2510 } 2511 2512 if (update_old && old_state != arc_l2c_only) { 2513 ASSERT(HDR_HAS_L1HDR(hdr)); 2514 if (GHOST_STATE(old_state)) { 2515 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2516 ASSERT(!HDR_HAS_RABD(hdr)); 2517 2518 /* 2519 * When moving a header off of a ghost state, 2520 * the header will not contain any arc buffers. 2521 * We use the arc header pointer for the reference 2522 * which is exactly what we did when we put the 2523 * header on the ghost state. 2524 */ 2525 2526 (void) zfs_refcount_remove_many( 2527 &old_state->arcs_size[type], 2528 HDR_GET_LSIZE(hdr), hdr); 2529 } else { 2530 2531 /* 2532 * Each individual buffer holds a unique reference, 2533 * thus we must remove each of these references one 2534 * at a time. 2535 */ 2536 for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; 2537 buf = buf->b_next) { 2538 2539 /* 2540 * When the arc_buf_t is sharing the data 2541 * block with the hdr, the owner of the 2542 * reference belongs to the hdr. Only 2543 * add to the refcount if the arc_buf_t is 2544 * not shared. 2545 */ 2546 if (ARC_BUF_SHARED(buf)) 2547 continue; 2548 2549 (void) zfs_refcount_remove_many( 2550 &old_state->arcs_size[type], 2551 arc_buf_size(buf), buf); 2552 } 2553 ASSERT(hdr->b_l1hdr.b_pabd != NULL || 2554 HDR_HAS_RABD(hdr)); 2555 2556 if (hdr->b_l1hdr.b_pabd != NULL) { 2557 (void) zfs_refcount_remove_many( 2558 &old_state->arcs_size[type], 2559 arc_hdr_size(hdr), hdr); 2560 } 2561 2562 if (HDR_HAS_RABD(hdr)) { 2563 (void) zfs_refcount_remove_many( 2564 &old_state->arcs_size[type], 2565 HDR_GET_PSIZE(hdr), hdr); 2566 } 2567 } 2568 } 2569 2570 if (HDR_HAS_L1HDR(hdr)) { 2571 hdr->b_l1hdr.b_state = new_state; 2572 2573 if (HDR_HAS_L2HDR(hdr) && new_state != arc_l2c_only) { 2574 l2arc_hdr_arcstats_decrement_state(hdr); 2575 hdr->b_l2hdr.b_arcs_state = new_state->arcs_state; 2576 l2arc_hdr_arcstats_increment_state(hdr); 2577 } 2578 } 2579 } 2580 2581 void 2582 arc_space_consume(uint64_t space, arc_space_type_t type) 2583 { 2584 ASSERT(type >= 0 && type < ARC_SPACE_NUMTYPES); 2585 2586 switch (type) { 2587 default: 2588 break; 2589 case ARC_SPACE_DATA: 2590 ARCSTAT_INCR(arcstat_data_size, space); 2591 break; 2592 case ARC_SPACE_META: 2593 ARCSTAT_INCR(arcstat_metadata_size, space); 2594 break; 2595 case ARC_SPACE_BONUS: 2596 ARCSTAT_INCR(arcstat_bonus_size, space); 2597 break; 2598 case ARC_SPACE_DNODE: 2599 ARCSTAT_INCR(arcstat_dnode_size, space); 2600 break; 2601 case ARC_SPACE_DBUF: 2602 ARCSTAT_INCR(arcstat_dbuf_size, space); 2603 break; 2604 case ARC_SPACE_HDRS: 2605 ARCSTAT_INCR(arcstat_hdr_size, space); 2606 break; 2607 case ARC_SPACE_L2HDRS: 2608 aggsum_add(&arc_sums.arcstat_l2_hdr_size, space); 2609 break; 2610 case ARC_SPACE_ABD_CHUNK_WASTE: 2611 /* 2612 * Note: this includes space wasted by all scatter ABD's, not 2613 * just those allocated by the ARC. But the vast majority of 2614 * scatter ABD's come from the ARC, because other users are 2615 * very short-lived. 
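 * Accounting the waste here keeps arc_sums.arcstat_size (updated at the
 * bottom of this function) in line with the memory actually allocated
 * rather than just the bytes requested.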
2616 */ 2617 ARCSTAT_INCR(arcstat_abd_chunk_waste_size, space); 2618 break; 2619 } 2620 2621 if (type != ARC_SPACE_DATA && type != ARC_SPACE_ABD_CHUNK_WASTE) 2622 ARCSTAT_INCR(arcstat_meta_used, space); 2623 2624 aggsum_add(&arc_sums.arcstat_size, space); 2625 } 2626 2627 void 2628 arc_space_return(uint64_t space, arc_space_type_t type) 2629 { 2630 ASSERT(type >= 0 && type < ARC_SPACE_NUMTYPES); 2631 2632 switch (type) { 2633 default: 2634 break; 2635 case ARC_SPACE_DATA: 2636 ARCSTAT_INCR(arcstat_data_size, -space); 2637 break; 2638 case ARC_SPACE_META: 2639 ARCSTAT_INCR(arcstat_metadata_size, -space); 2640 break; 2641 case ARC_SPACE_BONUS: 2642 ARCSTAT_INCR(arcstat_bonus_size, -space); 2643 break; 2644 case ARC_SPACE_DNODE: 2645 ARCSTAT_INCR(arcstat_dnode_size, -space); 2646 break; 2647 case ARC_SPACE_DBUF: 2648 ARCSTAT_INCR(arcstat_dbuf_size, -space); 2649 break; 2650 case ARC_SPACE_HDRS: 2651 ARCSTAT_INCR(arcstat_hdr_size, -space); 2652 break; 2653 case ARC_SPACE_L2HDRS: 2654 aggsum_add(&arc_sums.arcstat_l2_hdr_size, -space); 2655 break; 2656 case ARC_SPACE_ABD_CHUNK_WASTE: 2657 ARCSTAT_INCR(arcstat_abd_chunk_waste_size, -space); 2658 break; 2659 } 2660 2661 if (type != ARC_SPACE_DATA && type != ARC_SPACE_ABD_CHUNK_WASTE) 2662 ARCSTAT_INCR(arcstat_meta_used, -space); 2663 2664 ASSERT(aggsum_compare(&arc_sums.arcstat_size, space) >= 0); 2665 aggsum_add(&arc_sums.arcstat_size, -space); 2666 } 2667 2668 /* 2669 * Given a hdr and a buf, returns whether that buf can share its b_data buffer 2670 * with the hdr's b_pabd. 2671 */ 2672 static boolean_t 2673 arc_can_share(arc_buf_hdr_t *hdr, arc_buf_t *buf) 2674 { 2675 /* 2676 * The criteria for sharing a hdr's data are: 2677 * 1. the buffer is not encrypted 2678 * 2. the hdr's compression matches the buf's compression 2679 * 3. the hdr doesn't need to be byteswapped 2680 * 4. the hdr isn't already being shared 2681 * 5. the buf is either compressed or it is the last buf in the hdr list 2682 * 2683 * Criterion #5 maintains the invariant that shared uncompressed 2684 * bufs must be the final buf in the hdr's b_buf list. Reading this, you 2685 * might ask, "if a compressed buf is allocated first, won't that be the 2686 * last thing in the list?", but in that case it's impossible to create 2687 * a shared uncompressed buf anyway (because the hdr must be compressed 2688 * to have the compressed buf). You might also think that #3 is 2689 * sufficient to make this guarantee, however it's possible 2690 * (specifically in the rare L2ARC write race mentioned in 2691 * arc_buf_alloc_impl()) there will be an existing uncompressed buf that 2692 * is shareable, but wasn't at the time of its allocation. Rather than 2693 * allow a new shared uncompressed buf to be created and then shuffle 2694 * the list around to make it the last element, this simply disallows 2695 * sharing if the new buf isn't the first to be added. 2696 */ 2697 ASSERT3P(buf->b_hdr, ==, hdr); 2698 boolean_t hdr_compressed = 2699 arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF; 2700 boolean_t buf_compressed = ARC_BUF_COMPRESSED(buf) != 0; 2701 return (!ARC_BUF_ENCRYPTED(buf) && 2702 buf_compressed == hdr_compressed && 2703 hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS && 2704 !HDR_SHARED_DATA(hdr) && 2705 (ARC_BUF_LAST(buf) || ARC_BUF_COMPRESSED(buf))); 2706 } 2707 2708 /* 2709 * Allocate a buf for this hdr. If you care about the data that's in the hdr, 2710 * or if you want a compressed buffer, pass those flags in. 
Returns 0 if the 2711 * copy was made successfully, or an error code otherwise. 2712 */ 2713 static int 2714 arc_buf_alloc_impl(arc_buf_hdr_t *hdr, spa_t *spa, const zbookmark_phys_t *zb, 2715 const void *tag, boolean_t encrypted, boolean_t compressed, 2716 boolean_t noauth, boolean_t fill, arc_buf_t **ret) 2717 { 2718 arc_buf_t *buf; 2719 arc_fill_flags_t flags = ARC_FILL_LOCKED; 2720 2721 ASSERT(HDR_HAS_L1HDR(hdr)); 2722 ASSERT3U(HDR_GET_LSIZE(hdr), >, 0); 2723 VERIFY(hdr->b_type == ARC_BUFC_DATA || 2724 hdr->b_type == ARC_BUFC_METADATA); 2725 ASSERT3P(ret, !=, NULL); 2726 ASSERT3P(*ret, ==, NULL); 2727 IMPLY(encrypted, compressed); 2728 2729 buf = *ret = kmem_cache_alloc(buf_cache, KM_PUSHPAGE); 2730 buf->b_hdr = hdr; 2731 buf->b_data = NULL; 2732 buf->b_next = hdr->b_l1hdr.b_buf; 2733 buf->b_flags = 0; 2734 2735 add_reference(hdr, tag); 2736 2737 /* 2738 * We're about to change the hdr's b_flags. We must either 2739 * hold the hash_lock or be undiscoverable. 2740 */ 2741 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 2742 2743 /* 2744 * Only honor requests for compressed bufs if the hdr is actually 2745 * compressed. This must be overridden if the buffer is encrypted since 2746 * encrypted buffers cannot be decompressed. 2747 */ 2748 if (encrypted) { 2749 buf->b_flags |= ARC_BUF_FLAG_COMPRESSED; 2750 buf->b_flags |= ARC_BUF_FLAG_ENCRYPTED; 2751 flags |= ARC_FILL_COMPRESSED | ARC_FILL_ENCRYPTED; 2752 } else if (compressed && 2753 arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF) { 2754 buf->b_flags |= ARC_BUF_FLAG_COMPRESSED; 2755 flags |= ARC_FILL_COMPRESSED; 2756 } 2757 2758 if (noauth) { 2759 ASSERT0(encrypted); 2760 flags |= ARC_FILL_NOAUTH; 2761 } 2762 2763 /* 2764 * If the hdr's data can be shared then we share the data buffer and 2765 * set the appropriate bit in the hdr's b_flags to indicate the hdr is 2766 * sharing it's b_pabd with the arc_buf_t. Otherwise, we allocate a new 2767 * buffer to store the buf's data. 2768 * 2769 * There are two additional restrictions here because we're sharing 2770 * hdr -> buf instead of the usual buf -> hdr. First, the hdr can't be 2771 * actively involved in an L2ARC write, because if this buf is used by 2772 * an arc_write() then the hdr's data buffer will be released when the 2773 * write completes, even though the L2ARC write might still be using it. 2774 * Second, the hdr's ABD must be linear so that the buf's user doesn't 2775 * need to be ABD-aware. It must be allocated via 2776 * zio_[data_]buf_alloc(), not as a page, because we need to be able 2777 * to abd_release_ownership_of_buf(), which isn't allowed on "linear 2778 * page" buffers because the ABD code needs to handle freeing them 2779 * specially. 2780 */ 2781 boolean_t can_share = arc_can_share(hdr, buf) && 2782 !HDR_L2_WRITING(hdr) && 2783 hdr->b_l1hdr.b_pabd != NULL && 2784 abd_is_linear(hdr->b_l1hdr.b_pabd) && 2785 !abd_is_linear_page(hdr->b_l1hdr.b_pabd); 2786 2787 /* Set up b_data and sharing */ 2788 if (can_share) { 2789 buf->b_data = abd_to_buf(hdr->b_l1hdr.b_pabd); 2790 buf->b_flags |= ARC_BUF_FLAG_SHARED; 2791 arc_hdr_set_flags(hdr, ARC_FLAG_SHARED_DATA); 2792 } else { 2793 buf->b_data = 2794 arc_get_data_buf(hdr, arc_buf_size(buf), buf); 2795 ARCSTAT_INCR(arcstat_overhead_size, arc_buf_size(buf)); 2796 } 2797 VERIFY3P(buf->b_data, !=, NULL); 2798 2799 hdr->b_l1hdr.b_buf = buf; 2800 2801 /* 2802 * If the user wants the data from the hdr, we need to either copy or 2803 * decompress the data. 
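 * That work is delegated to arc_buf_fill() below; note that a valid
 * zbookmark (zb) must be supplied whenever fill is requested.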
2804 */ 2805 if (fill) { 2806 ASSERT3P(zb, !=, NULL); 2807 return (arc_buf_fill(buf, spa, zb, flags)); 2808 } 2809 2810 return (0); 2811 } 2812 2813 static const char *arc_onloan_tag = "onloan"; 2814 2815 static inline void 2816 arc_loaned_bytes_update(int64_t delta) 2817 { 2818 atomic_add_64(&arc_loaned_bytes, delta); 2819 2820 /* assert that it did not wrap around */ 2821 ASSERT3S(atomic_add_64_nv(&arc_loaned_bytes, 0), >=, 0); 2822 } 2823 2824 /* 2825 * Loan out an anonymous arc buffer. Loaned buffers are not counted as in 2826 * flight data by arc_tempreserve_space() until they are "returned". Loaned 2827 * buffers must be returned to the arc before they can be used by the DMU or 2828 * freed. 2829 */ 2830 arc_buf_t * 2831 arc_loan_buf(spa_t *spa, boolean_t is_metadata, int size) 2832 { 2833 arc_buf_t *buf = arc_alloc_buf(spa, arc_onloan_tag, 2834 is_metadata ? ARC_BUFC_METADATA : ARC_BUFC_DATA, size); 2835 2836 arc_loaned_bytes_update(arc_buf_size(buf)); 2837 2838 return (buf); 2839 } 2840 2841 arc_buf_t * 2842 arc_loan_compressed_buf(spa_t *spa, uint64_t psize, uint64_t lsize, 2843 enum zio_compress compression_type, uint8_t complevel) 2844 { 2845 arc_buf_t *buf = arc_alloc_compressed_buf(spa, arc_onloan_tag, 2846 psize, lsize, compression_type, complevel); 2847 2848 arc_loaned_bytes_update(arc_buf_size(buf)); 2849 2850 return (buf); 2851 } 2852 2853 arc_buf_t * 2854 arc_loan_raw_buf(spa_t *spa, uint64_t dsobj, boolean_t byteorder, 2855 const uint8_t *salt, const uint8_t *iv, const uint8_t *mac, 2856 dmu_object_type_t ot, uint64_t psize, uint64_t lsize, 2857 enum zio_compress compression_type, uint8_t complevel) 2858 { 2859 arc_buf_t *buf = arc_alloc_raw_buf(spa, arc_onloan_tag, dsobj, 2860 byteorder, salt, iv, mac, ot, psize, lsize, compression_type, 2861 complevel); 2862 2863 atomic_add_64(&arc_loaned_bytes, psize); 2864 return (buf); 2865 } 2866 2867 2868 /* 2869 * Return a loaned arc buffer to the arc. 2870 */ 2871 void 2872 arc_return_buf(arc_buf_t *buf, const void *tag) 2873 { 2874 arc_buf_hdr_t *hdr = buf->b_hdr; 2875 2876 ASSERT3P(buf->b_data, !=, NULL); 2877 ASSERT(HDR_HAS_L1HDR(hdr)); 2878 (void) zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, tag); 2879 (void) zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag); 2880 2881 arc_loaned_bytes_update(-arc_buf_size(buf)); 2882 } 2883 2884 /* Detach an arc_buf from a dbuf (tag) */ 2885 void 2886 arc_loan_inuse_buf(arc_buf_t *buf, const void *tag) 2887 { 2888 arc_buf_hdr_t *hdr = buf->b_hdr; 2889 2890 ASSERT3P(buf->b_data, !=, NULL); 2891 ASSERT(HDR_HAS_L1HDR(hdr)); 2892 (void) zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag); 2893 (void) zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, tag); 2894 2895 arc_loaned_bytes_update(arc_buf_size(buf)); 2896 } 2897 2898 static void 2899 l2arc_free_abd_on_write(abd_t *abd, size_t size, arc_buf_contents_t type) 2900 { 2901 l2arc_data_free_t *df = kmem_alloc(sizeof (*df), KM_SLEEP); 2902 2903 df->l2df_abd = abd; 2904 df->l2df_size = size; 2905 df->l2df_type = type; 2906 mutex_enter(&l2arc_free_on_write_mtx); 2907 list_insert_head(l2arc_free_on_write, df); 2908 mutex_exit(&l2arc_free_on_write_mtx); 2909 } 2910 2911 static void 2912 arc_hdr_free_on_write(arc_buf_hdr_t *hdr, boolean_t free_rdata) 2913 { 2914 arc_state_t *state = hdr->b_l1hdr.b_state; 2915 arc_buf_contents_t type = arc_buf_type(hdr); 2916 uint64_t size = (free_rdata) ? 
HDR_GET_PSIZE(hdr) : arc_hdr_size(hdr); 2917 2918 /* protected by hash lock, if in the hash table */ 2919 if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) { 2920 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 2921 ASSERT(state != arc_anon && state != arc_l2c_only); 2922 2923 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2924 size, hdr); 2925 } 2926 (void) zfs_refcount_remove_many(&state->arcs_size[type], size, hdr); 2927 if (type == ARC_BUFC_METADATA) { 2928 arc_space_return(size, ARC_SPACE_META); 2929 } else { 2930 ASSERT(type == ARC_BUFC_DATA); 2931 arc_space_return(size, ARC_SPACE_DATA); 2932 } 2933 2934 if (free_rdata) { 2935 l2arc_free_abd_on_write(hdr->b_crypt_hdr.b_rabd, size, type); 2936 } else { 2937 l2arc_free_abd_on_write(hdr->b_l1hdr.b_pabd, size, type); 2938 } 2939 } 2940 2941 /* 2942 * Share the arc_buf_t's data with the hdr. Whenever we are sharing the 2943 * data buffer, we transfer the refcount ownership to the hdr and update 2944 * the appropriate kstats. 2945 */ 2946 static void 2947 arc_share_buf(arc_buf_hdr_t *hdr, arc_buf_t *buf) 2948 { 2949 ASSERT(arc_can_share(hdr, buf)); 2950 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2951 ASSERT(!ARC_BUF_ENCRYPTED(buf)); 2952 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 2953 2954 /* 2955 * Start sharing the data buffer. We transfer the 2956 * refcount ownership to the hdr since it always owns 2957 * the refcount whenever an arc_buf_t is shared. 2958 */ 2959 zfs_refcount_transfer_ownership_many( 2960 &hdr->b_l1hdr.b_state->arcs_size[arc_buf_type(hdr)], 2961 arc_hdr_size(hdr), buf, hdr); 2962 hdr->b_l1hdr.b_pabd = abd_get_from_buf(buf->b_data, arc_buf_size(buf)); 2963 abd_take_ownership_of_buf(hdr->b_l1hdr.b_pabd, 2964 HDR_ISTYPE_METADATA(hdr)); 2965 arc_hdr_set_flags(hdr, ARC_FLAG_SHARED_DATA); 2966 buf->b_flags |= ARC_BUF_FLAG_SHARED; 2967 2968 /* 2969 * Since we've transferred ownership to the hdr we need 2970 * to increment its compressed and uncompressed kstats and 2971 * decrement the overhead size. 2972 */ 2973 ARCSTAT_INCR(arcstat_compressed_size, arc_hdr_size(hdr)); 2974 ARCSTAT_INCR(arcstat_uncompressed_size, HDR_GET_LSIZE(hdr)); 2975 ARCSTAT_INCR(arcstat_overhead_size, -arc_buf_size(buf)); 2976 } 2977 2978 static void 2979 arc_unshare_buf(arc_buf_hdr_t *hdr, arc_buf_t *buf) 2980 { 2981 ASSERT(arc_buf_is_shared(buf)); 2982 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 2983 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 2984 2985 /* 2986 * We are no longer sharing this buffer so we need 2987 * to transfer its ownership to the rightful owner. 2988 */ 2989 zfs_refcount_transfer_ownership_many( 2990 &hdr->b_l1hdr.b_state->arcs_size[arc_buf_type(hdr)], 2991 arc_hdr_size(hdr), hdr, buf); 2992 arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA); 2993 abd_release_ownership_of_buf(hdr->b_l1hdr.b_pabd); 2994 abd_free(hdr->b_l1hdr.b_pabd); 2995 hdr->b_l1hdr.b_pabd = NULL; 2996 buf->b_flags &= ~ARC_BUF_FLAG_SHARED; 2997 2998 /* 2999 * Since the buffer is no longer shared between 3000 * the arc buf and the hdr, count it as overhead. 3001 */ 3002 ARCSTAT_INCR(arcstat_compressed_size, -arc_hdr_size(hdr)); 3003 ARCSTAT_INCR(arcstat_uncompressed_size, -HDR_GET_LSIZE(hdr)); 3004 ARCSTAT_INCR(arcstat_overhead_size, arc_buf_size(buf)); 3005 } 3006 3007 /* 3008 * Remove an arc_buf_t from the hdr's buf list and return the last 3009 * arc_buf_t on the list. If no buffers remain on the list then return 3010 * NULL. 
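 * Callers such as arc_buf_destroy_impl() use the returned last buffer to
 * decide whether the hdr's shared b_pabd needs to be re-established with
 * a remaining buffer.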
3011 */ 3012 static arc_buf_t * 3013 arc_buf_remove(arc_buf_hdr_t *hdr, arc_buf_t *buf) 3014 { 3015 ASSERT(HDR_HAS_L1HDR(hdr)); 3016 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 3017 3018 arc_buf_t **bufp = &hdr->b_l1hdr.b_buf; 3019 arc_buf_t *lastbuf = NULL; 3020 3021 /* 3022 * Remove the buf from the hdr list and locate the last 3023 * remaining buffer on the list. 3024 */ 3025 while (*bufp != NULL) { 3026 if (*bufp == buf) 3027 *bufp = buf->b_next; 3028 3029 /* 3030 * If we've removed a buffer in the middle of 3031 * the list then update the lastbuf and update 3032 * bufp. 3033 */ 3034 if (*bufp != NULL) { 3035 lastbuf = *bufp; 3036 bufp = &(*bufp)->b_next; 3037 } 3038 } 3039 buf->b_next = NULL; 3040 ASSERT3P(lastbuf, !=, buf); 3041 IMPLY(lastbuf != NULL, ARC_BUF_LAST(lastbuf)); 3042 3043 return (lastbuf); 3044 } 3045 3046 /* 3047 * Free up buf->b_data and pull the arc_buf_t off of the arc_buf_hdr_t's 3048 * list and free it. 3049 */ 3050 static void 3051 arc_buf_destroy_impl(arc_buf_t *buf) 3052 { 3053 arc_buf_hdr_t *hdr = buf->b_hdr; 3054 3055 /* 3056 * Free up the data associated with the buf but only if we're not 3057 * sharing this with the hdr. If we are sharing it with the hdr, the 3058 * hdr is responsible for doing the free. 3059 */ 3060 if (buf->b_data != NULL) { 3061 /* 3062 * We're about to change the hdr's b_flags. We must either 3063 * hold the hash_lock or be undiscoverable. 3064 */ 3065 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 3066 3067 arc_cksum_verify(buf); 3068 arc_buf_unwatch(buf); 3069 3070 if (ARC_BUF_SHARED(buf)) { 3071 arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA); 3072 } else { 3073 ASSERT(!arc_buf_is_shared(buf)); 3074 uint64_t size = arc_buf_size(buf); 3075 arc_free_data_buf(hdr, buf->b_data, size, buf); 3076 ARCSTAT_INCR(arcstat_overhead_size, -size); 3077 } 3078 buf->b_data = NULL; 3079 3080 /* 3081 * If we have no more encrypted buffers and we've already 3082 * gotten a copy of the decrypted data we can free b_rabd 3083 * to save some space. 3084 */ 3085 if (ARC_BUF_ENCRYPTED(buf) && HDR_HAS_RABD(hdr) && 3086 hdr->b_l1hdr.b_pabd != NULL && !HDR_IO_IN_PROGRESS(hdr)) { 3087 arc_buf_t *b; 3088 for (b = hdr->b_l1hdr.b_buf; b; b = b->b_next) { 3089 if (b != buf && ARC_BUF_ENCRYPTED(b)) 3090 break; 3091 } 3092 if (b == NULL) 3093 arc_hdr_free_abd(hdr, B_TRUE); 3094 } 3095 } 3096 3097 arc_buf_t *lastbuf = arc_buf_remove(hdr, buf); 3098 3099 if (ARC_BUF_SHARED(buf) && !ARC_BUF_COMPRESSED(buf)) { 3100 /* 3101 * If the current arc_buf_t is sharing its data buffer with the 3102 * hdr, then reassign the hdr's b_pabd to share it with the new 3103 * buffer at the end of the list. The shared buffer is always 3104 * the last one on the hdr's buffer list. 3105 * 3106 * There is an equivalent case for compressed bufs, but since 3107 * they aren't guaranteed to be the last buf in the list and 3108 * that is an exceedingly rare case, we just allow that space be 3109 * wasted temporarily. We must also be careful not to share 3110 * encrypted buffers, since they cannot be shared. 3111 */ 3112 if (lastbuf != NULL && !ARC_BUF_ENCRYPTED(lastbuf)) { 3113 /* Only one buf can be shared at once */ 3114 ASSERT(!arc_buf_is_shared(lastbuf)); 3115 /* hdr is uncompressed so can't have compressed buf */ 3116 ASSERT(!ARC_BUF_COMPRESSED(lastbuf)); 3117 3118 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 3119 arc_hdr_free_abd(hdr, B_FALSE); 3120 3121 /* 3122 * We must setup a new shared block between the 3123 * last buffer and the hdr. 
The data would have 3124 * been allocated by the arc buf so we need to transfer 3125 * ownership to the hdr since it's now being shared. 3126 */ 3127 arc_share_buf(hdr, lastbuf); 3128 } 3129 } else if (HDR_SHARED_DATA(hdr)) { 3130 /* 3131 * Uncompressed shared buffers are always at the end 3132 * of the list. Compressed buffers don't have the 3133 * same requirements. This makes it hard to 3134 * simply assert that the lastbuf is shared so 3135 * we rely on the hdr's compression flags to determine 3136 * if we have a compressed, shared buffer. 3137 */ 3138 ASSERT3P(lastbuf, !=, NULL); 3139 ASSERT(arc_buf_is_shared(lastbuf) || 3140 arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF); 3141 } 3142 3143 /* 3144 * Free the checksum if we're removing the last uncompressed buf from 3145 * this hdr. 3146 */ 3147 if (!arc_hdr_has_uncompressed_buf(hdr)) { 3148 arc_cksum_free(hdr); 3149 } 3150 3151 /* clean up the buf */ 3152 buf->b_hdr = NULL; 3153 kmem_cache_free(buf_cache, buf); 3154 } 3155 3156 static void 3157 arc_hdr_alloc_abd(arc_buf_hdr_t *hdr, int alloc_flags) 3158 { 3159 uint64_t size; 3160 boolean_t alloc_rdata = ((alloc_flags & ARC_HDR_ALLOC_RDATA) != 0); 3161 3162 ASSERT3U(HDR_GET_LSIZE(hdr), >, 0); 3163 ASSERT(HDR_HAS_L1HDR(hdr)); 3164 ASSERT(!HDR_SHARED_DATA(hdr) || alloc_rdata); 3165 IMPLY(alloc_rdata, HDR_PROTECTED(hdr)); 3166 3167 if (alloc_rdata) { 3168 size = HDR_GET_PSIZE(hdr); 3169 ASSERT3P(hdr->b_crypt_hdr.b_rabd, ==, NULL); 3170 hdr->b_crypt_hdr.b_rabd = arc_get_data_abd(hdr, size, hdr, 3171 alloc_flags); 3172 ASSERT3P(hdr->b_crypt_hdr.b_rabd, !=, NULL); 3173 ARCSTAT_INCR(arcstat_raw_size, size); 3174 } else { 3175 size = arc_hdr_size(hdr); 3176 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 3177 hdr->b_l1hdr.b_pabd = arc_get_data_abd(hdr, size, hdr, 3178 alloc_flags); 3179 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 3180 } 3181 3182 ARCSTAT_INCR(arcstat_compressed_size, size); 3183 ARCSTAT_INCR(arcstat_uncompressed_size, HDR_GET_LSIZE(hdr)); 3184 } 3185 3186 static void 3187 arc_hdr_free_abd(arc_buf_hdr_t *hdr, boolean_t free_rdata) 3188 { 3189 uint64_t size = (free_rdata) ? HDR_GET_PSIZE(hdr) : arc_hdr_size(hdr); 3190 3191 ASSERT(HDR_HAS_L1HDR(hdr)); 3192 ASSERT(hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr)); 3193 IMPLY(free_rdata, HDR_HAS_RABD(hdr)); 3194 3195 /* 3196 * If the hdr is currently being written to the l2arc then 3197 * we defer freeing the data by adding it to the l2arc_free_on_write 3198 * list. The l2arc will free the data once it's finished 3199 * writing it to the l2arc device. 3200 */ 3201 if (HDR_L2_WRITING(hdr)) { 3202 arc_hdr_free_on_write(hdr, free_rdata); 3203 ARCSTAT_BUMP(arcstat_l2_free_on_write); 3204 } else if (free_rdata) { 3205 arc_free_data_abd(hdr, hdr->b_crypt_hdr.b_rabd, size, hdr); 3206 } else { 3207 arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, size, hdr); 3208 } 3209 3210 if (free_rdata) { 3211 hdr->b_crypt_hdr.b_rabd = NULL; 3212 ARCSTAT_INCR(arcstat_raw_size, -size); 3213 } else { 3214 hdr->b_l1hdr.b_pabd = NULL; 3215 } 3216 3217 if (hdr->b_l1hdr.b_pabd == NULL && !HDR_HAS_RABD(hdr)) 3218 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS; 3219 3220 ARCSTAT_INCR(arcstat_compressed_size, -size); 3221 ARCSTAT_INCR(arcstat_uncompressed_size, -HDR_GET_LSIZE(hdr)); 3222 } 3223 3224 /* 3225 * Allocate empty anonymous ARC header. The header will get its identity 3226 * assigned and buffers attached later as part of read or write operations. 
3227 * 3228 * In case of read arc_read() assigns header its identify (b_dva + b_birth), 3229 * inserts it into ARC hash to become globally visible and allocates physical 3230 * (b_pabd) or raw (b_rabd) ABD buffer to read into from disk. On disk read 3231 * completion arc_read_done() allocates ARC buffer(s) as needed, potentially 3232 * sharing one of them with the physical ABD buffer. 3233 * 3234 * In case of write arc_alloc_buf() allocates ARC buffer to be filled with 3235 * data. Then after compression and/or encryption arc_write_ready() allocates 3236 * and fills (or potentially shares) physical (b_pabd) or raw (b_rabd) ABD 3237 * buffer. On disk write completion arc_write_done() assigns the header its 3238 * new identity (b_dva + b_birth) and inserts into ARC hash. 3239 * 3240 * In case of partial overwrite the old data is read first as described. Then 3241 * arc_release() either allocates new anonymous ARC header and moves the ARC 3242 * buffer to it, or reuses the old ARC header by discarding its identity and 3243 * removing it from ARC hash. After buffer modification normal write process 3244 * follows as described. 3245 */ 3246 static arc_buf_hdr_t * 3247 arc_hdr_alloc(uint64_t spa, int32_t psize, int32_t lsize, 3248 boolean_t protected, enum zio_compress compression_type, uint8_t complevel, 3249 arc_buf_contents_t type) 3250 { 3251 arc_buf_hdr_t *hdr; 3252 3253 VERIFY(type == ARC_BUFC_DATA || type == ARC_BUFC_METADATA); 3254 hdr = kmem_cache_alloc(hdr_full_cache, KM_PUSHPAGE); 3255 3256 ASSERT(HDR_EMPTY(hdr)); 3257 #ifdef ZFS_DEBUG 3258 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); 3259 #endif 3260 HDR_SET_PSIZE(hdr, psize); 3261 HDR_SET_LSIZE(hdr, lsize); 3262 hdr->b_spa = spa; 3263 hdr->b_type = type; 3264 hdr->b_flags = 0; 3265 arc_hdr_set_flags(hdr, arc_bufc_to_flags(type) | ARC_FLAG_HAS_L1HDR); 3266 arc_hdr_set_compress(hdr, compression_type); 3267 hdr->b_complevel = complevel; 3268 if (protected) 3269 arc_hdr_set_flags(hdr, ARC_FLAG_PROTECTED); 3270 3271 hdr->b_l1hdr.b_state = arc_anon; 3272 hdr->b_l1hdr.b_arc_access = 0; 3273 hdr->b_l1hdr.b_mru_hits = 0; 3274 hdr->b_l1hdr.b_mru_ghost_hits = 0; 3275 hdr->b_l1hdr.b_mfu_hits = 0; 3276 hdr->b_l1hdr.b_mfu_ghost_hits = 0; 3277 hdr->b_l1hdr.b_buf = NULL; 3278 3279 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 3280 3281 return (hdr); 3282 } 3283 3284 /* 3285 * Transition between the two allocation states for the arc_buf_hdr struct. 3286 * The arc_buf_hdr struct can be allocated with (hdr_full_cache) or without 3287 * (hdr_l2only_cache) the fields necessary for the L1 cache - the smaller 3288 * version is used when a cache buffer is only in the L2ARC in order to reduce 3289 * memory usage. 3290 */ 3291 static arc_buf_hdr_t * 3292 arc_hdr_realloc(arc_buf_hdr_t *hdr, kmem_cache_t *old, kmem_cache_t *new) 3293 { 3294 ASSERT(HDR_HAS_L2HDR(hdr)); 3295 3296 arc_buf_hdr_t *nhdr; 3297 l2arc_dev_t *dev = hdr->b_l2hdr.b_dev; 3298 3299 ASSERT((old == hdr_full_cache && new == hdr_l2only_cache) || 3300 (old == hdr_l2only_cache && new == hdr_full_cache)); 3301 3302 nhdr = kmem_cache_alloc(new, KM_PUSHPAGE); 3303 3304 ASSERT(MUTEX_HELD(HDR_LOCK(hdr))); 3305 buf_hash_remove(hdr); 3306 3307 memcpy(nhdr, hdr, HDR_L2ONLY_SIZE); 3308 3309 if (new == hdr_full_cache) { 3310 arc_hdr_set_flags(nhdr, ARC_FLAG_HAS_L1HDR); 3311 /* 3312 * arc_access and arc_change_state need to be aware that a 3313 * header has just come out of L2ARC, so we set its state to 3314 * l2c_only even though it's about to change. 
3315 */ 3316 nhdr->b_l1hdr.b_state = arc_l2c_only; 3317 3318 /* Verify previous threads set to NULL before freeing */ 3319 ASSERT3P(nhdr->b_l1hdr.b_pabd, ==, NULL); 3320 ASSERT(!HDR_HAS_RABD(hdr)); 3321 } else { 3322 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 3323 #ifdef ZFS_DEBUG 3324 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); 3325 #endif 3326 3327 /* 3328 * If we've reached here, We must have been called from 3329 * arc_evict_hdr(), as such we should have already been 3330 * removed from any ghost list we were previously on 3331 * (which protects us from racing with arc_evict_state), 3332 * thus no locking is needed during this check. 3333 */ 3334 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 3335 3336 /* 3337 * A buffer must not be moved into the arc_l2c_only 3338 * state if it's not finished being written out to the 3339 * l2arc device. Otherwise, the b_l1hdr.b_pabd field 3340 * might try to be accessed, even though it was removed. 3341 */ 3342 VERIFY(!HDR_L2_WRITING(hdr)); 3343 VERIFY3P(hdr->b_l1hdr.b_pabd, ==, NULL); 3344 ASSERT(!HDR_HAS_RABD(hdr)); 3345 3346 arc_hdr_clear_flags(nhdr, ARC_FLAG_HAS_L1HDR); 3347 } 3348 /* 3349 * The header has been reallocated so we need to re-insert it into any 3350 * lists it was on. 3351 */ 3352 (void) buf_hash_insert(nhdr, NULL); 3353 3354 ASSERT(list_link_active(&hdr->b_l2hdr.b_l2node)); 3355 3356 mutex_enter(&dev->l2ad_mtx); 3357 3358 /* 3359 * We must place the realloc'ed header back into the list at 3360 * the same spot. Otherwise, if it's placed earlier in the list, 3361 * l2arc_write_buffers() could find it during the function's 3362 * write phase, and try to write it out to the l2arc. 3363 */ 3364 list_insert_after(&dev->l2ad_buflist, hdr, nhdr); 3365 list_remove(&dev->l2ad_buflist, hdr); 3366 3367 mutex_exit(&dev->l2ad_mtx); 3368 3369 /* 3370 * Since we're using the pointer address as the tag when 3371 * incrementing and decrementing the l2ad_alloc refcount, we 3372 * must remove the old pointer (that we're about to destroy) and 3373 * add the new pointer to the refcount. Otherwise we'd remove 3374 * the wrong pointer address when calling arc_hdr_destroy() later. 3375 */ 3376 3377 (void) zfs_refcount_remove_many(&dev->l2ad_alloc, 3378 arc_hdr_size(hdr), hdr); 3379 (void) zfs_refcount_add_many(&dev->l2ad_alloc, 3380 arc_hdr_size(nhdr), nhdr); 3381 3382 buf_discard_identity(hdr); 3383 kmem_cache_free(old, hdr); 3384 3385 return (nhdr); 3386 } 3387 3388 /* 3389 * This function is used by the send / receive code to convert a newly 3390 * allocated arc_buf_t to one that is suitable for a raw encrypted write. It 3391 * is also used to allow the root objset block to be updated without altering 3392 * its embedded MACs. Both block types will always be uncompressed so we do not 3393 * have to worry about compression type or psize. 3394 */ 3395 void 3396 arc_convert_to_raw(arc_buf_t *buf, uint64_t dsobj, boolean_t byteorder, 3397 dmu_object_type_t ot, const uint8_t *salt, const uint8_t *iv, 3398 const uint8_t *mac) 3399 { 3400 arc_buf_hdr_t *hdr = buf->b_hdr; 3401 3402 ASSERT(ot == DMU_OT_DNODE || ot == DMU_OT_OBJSET); 3403 ASSERT(HDR_HAS_L1HDR(hdr)); 3404 ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); 3405 3406 buf->b_flags |= (ARC_BUF_FLAG_COMPRESSED | ARC_BUF_FLAG_ENCRYPTED); 3407 arc_hdr_set_flags(hdr, ARC_FLAG_PROTECTED); 3408 hdr->b_crypt_hdr.b_dsobj = dsobj; 3409 hdr->b_crypt_hdr.b_ot = ot; 3410 hdr->b_l1hdr.b_byteswap = (byteorder == ZFS_HOST_BYTEORDER) ? 
3411 DMU_BSWAP_NUMFUNCS : DMU_OT_BYTESWAP(ot); 3412 if (!arc_hdr_has_uncompressed_buf(hdr)) 3413 arc_cksum_free(hdr); 3414 3415 if (salt != NULL) 3416 memcpy(hdr->b_crypt_hdr.b_salt, salt, ZIO_DATA_SALT_LEN); 3417 if (iv != NULL) 3418 memcpy(hdr->b_crypt_hdr.b_iv, iv, ZIO_DATA_IV_LEN); 3419 if (mac != NULL) 3420 memcpy(hdr->b_crypt_hdr.b_mac, mac, ZIO_DATA_MAC_LEN); 3421 } 3422 3423 /* 3424 * Allocate a new arc_buf_hdr_t and arc_buf_t and return the buf to the caller. 3425 * The buf is returned thawed since we expect the consumer to modify it. 3426 */ 3427 arc_buf_t * 3428 arc_alloc_buf(spa_t *spa, const void *tag, arc_buf_contents_t type, 3429 int32_t size) 3430 { 3431 arc_buf_hdr_t *hdr = arc_hdr_alloc(spa_load_guid(spa), size, size, 3432 B_FALSE, ZIO_COMPRESS_OFF, 0, type); 3433 3434 arc_buf_t *buf = NULL; 3435 VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_FALSE, B_FALSE, 3436 B_FALSE, B_FALSE, &buf)); 3437 arc_buf_thaw(buf); 3438 3439 return (buf); 3440 } 3441 3442 /* 3443 * Allocate a compressed buf in the same manner as arc_alloc_buf. Don't use this 3444 * for bufs containing metadata. 3445 */ 3446 arc_buf_t * 3447 arc_alloc_compressed_buf(spa_t *spa, const void *tag, uint64_t psize, 3448 uint64_t lsize, enum zio_compress compression_type, uint8_t complevel) 3449 { 3450 ASSERT3U(lsize, >, 0); 3451 ASSERT3U(lsize, >=, psize); 3452 ASSERT3U(compression_type, >, ZIO_COMPRESS_OFF); 3453 ASSERT3U(compression_type, <, ZIO_COMPRESS_FUNCTIONS); 3454 3455 arc_buf_hdr_t *hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize, 3456 B_FALSE, compression_type, complevel, ARC_BUFC_DATA); 3457 3458 arc_buf_t *buf = NULL; 3459 VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_FALSE, 3460 B_TRUE, B_FALSE, B_FALSE, &buf)); 3461 arc_buf_thaw(buf); 3462 3463 /* 3464 * To ensure that the hdr has the correct data in it if we call 3465 * arc_untransform() on this buf before it's been written to disk, 3466 * it's easiest if we just set up sharing between the buf and the hdr. 3467 */ 3468 arc_share_buf(hdr, buf); 3469 3470 return (buf); 3471 } 3472 3473 arc_buf_t * 3474 arc_alloc_raw_buf(spa_t *spa, const void *tag, uint64_t dsobj, 3475 boolean_t byteorder, const uint8_t *salt, const uint8_t *iv, 3476 const uint8_t *mac, dmu_object_type_t ot, uint64_t psize, uint64_t lsize, 3477 enum zio_compress compression_type, uint8_t complevel) 3478 { 3479 arc_buf_hdr_t *hdr; 3480 arc_buf_t *buf; 3481 arc_buf_contents_t type = DMU_OT_IS_METADATA(ot) ? 3482 ARC_BUFC_METADATA : ARC_BUFC_DATA; 3483 3484 ASSERT3U(lsize, >, 0); 3485 ASSERT3U(lsize, >=, psize); 3486 ASSERT3U(compression_type, >=, ZIO_COMPRESS_OFF); 3487 ASSERT3U(compression_type, <, ZIO_COMPRESS_FUNCTIONS); 3488 3489 hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize, B_TRUE, 3490 compression_type, complevel, type); 3491 3492 hdr->b_crypt_hdr.b_dsobj = dsobj; 3493 hdr->b_crypt_hdr.b_ot = ot; 3494 hdr->b_l1hdr.b_byteswap = (byteorder == ZFS_HOST_BYTEORDER) ? 3495 DMU_BSWAP_NUMFUNCS : DMU_OT_BYTESWAP(ot); 3496 memcpy(hdr->b_crypt_hdr.b_salt, salt, ZIO_DATA_SALT_LEN); 3497 memcpy(hdr->b_crypt_hdr.b_iv, iv, ZIO_DATA_IV_LEN); 3498 memcpy(hdr->b_crypt_hdr.b_mac, mac, ZIO_DATA_MAC_LEN); 3499 3500 /* 3501 * This buffer will be considered encrypted even if the ot is not an 3502 * encrypted type. It will become authenticated instead in 3503 * arc_write_ready(). 
3504 */ 3505 buf = NULL; 3506 VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_TRUE, B_TRUE, 3507 B_FALSE, B_FALSE, &buf)); 3508 arc_buf_thaw(buf); 3509 3510 return (buf); 3511 } 3512 3513 static void 3514 l2arc_hdr_arcstats_update(arc_buf_hdr_t *hdr, boolean_t incr, 3515 boolean_t state_only) 3516 { 3517 l2arc_buf_hdr_t *l2hdr = &hdr->b_l2hdr; 3518 l2arc_dev_t *dev = l2hdr->b_dev; 3519 uint64_t lsize = HDR_GET_LSIZE(hdr); 3520 uint64_t psize = HDR_GET_PSIZE(hdr); 3521 uint64_t asize = vdev_psize_to_asize(dev->l2ad_vdev, psize); 3522 arc_buf_contents_t type = hdr->b_type; 3523 int64_t lsize_s; 3524 int64_t psize_s; 3525 int64_t asize_s; 3526 3527 if (incr) { 3528 lsize_s = lsize; 3529 psize_s = psize; 3530 asize_s = asize; 3531 } else { 3532 lsize_s = -lsize; 3533 psize_s = -psize; 3534 asize_s = -asize; 3535 } 3536 3537 /* If the buffer is a prefetch, count it as such. */ 3538 if (HDR_PREFETCH(hdr)) { 3539 ARCSTAT_INCR(arcstat_l2_prefetch_asize, asize_s); 3540 } else { 3541 /* 3542 * We use the value stored in the L2 header upon initial 3543 * caching in L2ARC. This value will be updated in case 3544 * an MRU/MRU_ghost buffer transitions to MFU but the L2ARC 3545 * metadata (log entry) cannot currently be updated. Having 3546 * the ARC state in the L2 header solves the problem of a 3547 * possibly absent L1 header (apparent in buffers restored 3548 * from persistent L2ARC). 3549 */ 3550 switch (hdr->b_l2hdr.b_arcs_state) { 3551 case ARC_STATE_MRU_GHOST: 3552 case ARC_STATE_MRU: 3553 ARCSTAT_INCR(arcstat_l2_mru_asize, asize_s); 3554 break; 3555 case ARC_STATE_MFU_GHOST: 3556 case ARC_STATE_MFU: 3557 ARCSTAT_INCR(arcstat_l2_mfu_asize, asize_s); 3558 break; 3559 default: 3560 break; 3561 } 3562 } 3563 3564 if (state_only) 3565 return; 3566 3567 ARCSTAT_INCR(arcstat_l2_psize, psize_s); 3568 ARCSTAT_INCR(arcstat_l2_lsize, lsize_s); 3569 3570 switch (type) { 3571 case ARC_BUFC_DATA: 3572 ARCSTAT_INCR(arcstat_l2_bufc_data_asize, asize_s); 3573 break; 3574 case ARC_BUFC_METADATA: 3575 ARCSTAT_INCR(arcstat_l2_bufc_metadata_asize, asize_s); 3576 break; 3577 default: 3578 break; 3579 } 3580 } 3581 3582 3583 static void 3584 arc_hdr_l2hdr_destroy(arc_buf_hdr_t *hdr) 3585 { 3586 l2arc_buf_hdr_t *l2hdr = &hdr->b_l2hdr; 3587 l2arc_dev_t *dev = l2hdr->b_dev; 3588 uint64_t psize = HDR_GET_PSIZE(hdr); 3589 uint64_t asize = vdev_psize_to_asize(dev->l2ad_vdev, psize); 3590 3591 ASSERT(MUTEX_HELD(&dev->l2ad_mtx)); 3592 ASSERT(HDR_HAS_L2HDR(hdr)); 3593 3594 list_remove(&dev->l2ad_buflist, hdr); 3595 3596 l2arc_hdr_arcstats_decrement(hdr); 3597 vdev_space_update(dev->l2ad_vdev, -asize, 0, 0); 3598 3599 (void) zfs_refcount_remove_many(&dev->l2ad_alloc, arc_hdr_size(hdr), 3600 hdr); 3601 arc_hdr_clear_flags(hdr, ARC_FLAG_HAS_L2HDR); 3602 } 3603 3604 static void 3605 arc_hdr_destroy(arc_buf_hdr_t *hdr) 3606 { 3607 if (HDR_HAS_L1HDR(hdr)) { 3608 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 3609 ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); 3610 } 3611 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 3612 ASSERT(!HDR_IN_HASH_TABLE(hdr)); 3613 3614 if (HDR_HAS_L2HDR(hdr)) { 3615 l2arc_dev_t *dev = hdr->b_l2hdr.b_dev; 3616 boolean_t buflist_held = MUTEX_HELD(&dev->l2ad_mtx); 3617 3618 if (!buflist_held) 3619 mutex_enter(&dev->l2ad_mtx); 3620 3621 /* 3622 * Even though we checked this conditional above, we 3623 * need to check this again now that we have the 3624 * l2ad_mtx. 
This is because we could be racing with 3625 * another thread calling l2arc_evict() which might have 3626 * destroyed this header's L2 portion as we were waiting 3627 * to acquire the l2ad_mtx. If that happens, we don't 3628 * want to re-destroy the header's L2 portion. 3629 */ 3630 if (HDR_HAS_L2HDR(hdr)) { 3631 3632 if (!HDR_EMPTY(hdr)) 3633 buf_discard_identity(hdr); 3634 3635 arc_hdr_l2hdr_destroy(hdr); 3636 } 3637 3638 if (!buflist_held) 3639 mutex_exit(&dev->l2ad_mtx); 3640 } 3641 3642 /* 3643 * The header's identify can only be safely discarded once it is no 3644 * longer discoverable. This requires removing it from the hash table 3645 * and the l2arc header list. After this point the hash lock can not 3646 * be used to protect the header. 3647 */ 3648 if (!HDR_EMPTY(hdr)) 3649 buf_discard_identity(hdr); 3650 3651 if (HDR_HAS_L1HDR(hdr)) { 3652 arc_cksum_free(hdr); 3653 3654 while (hdr->b_l1hdr.b_buf != NULL) 3655 arc_buf_destroy_impl(hdr->b_l1hdr.b_buf); 3656 3657 if (hdr->b_l1hdr.b_pabd != NULL) 3658 arc_hdr_free_abd(hdr, B_FALSE); 3659 3660 if (HDR_HAS_RABD(hdr)) 3661 arc_hdr_free_abd(hdr, B_TRUE); 3662 } 3663 3664 ASSERT3P(hdr->b_hash_next, ==, NULL); 3665 if (HDR_HAS_L1HDR(hdr)) { 3666 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 3667 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); 3668 #ifdef ZFS_DEBUG 3669 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); 3670 #endif 3671 kmem_cache_free(hdr_full_cache, hdr); 3672 } else { 3673 kmem_cache_free(hdr_l2only_cache, hdr); 3674 } 3675 } 3676 3677 void 3678 arc_buf_destroy(arc_buf_t *buf, const void *tag) 3679 { 3680 arc_buf_hdr_t *hdr = buf->b_hdr; 3681 3682 if (hdr->b_l1hdr.b_state == arc_anon) { 3683 ASSERT3P(hdr->b_l1hdr.b_buf, ==, buf); 3684 ASSERT(ARC_BUF_LAST(buf)); 3685 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 3686 VERIFY0(remove_reference(hdr, tag)); 3687 return; 3688 } 3689 3690 kmutex_t *hash_lock = HDR_LOCK(hdr); 3691 mutex_enter(hash_lock); 3692 3693 ASSERT3P(hdr, ==, buf->b_hdr); 3694 ASSERT3P(hdr->b_l1hdr.b_buf, !=, NULL); 3695 ASSERT3P(hash_lock, ==, HDR_LOCK(hdr)); 3696 ASSERT3P(hdr->b_l1hdr.b_state, !=, arc_anon); 3697 ASSERT3P(buf->b_data, !=, NULL); 3698 3699 arc_buf_destroy_impl(buf); 3700 (void) remove_reference(hdr, tag); 3701 mutex_exit(hash_lock); 3702 } 3703 3704 /* 3705 * Evict the arc_buf_hdr that is provided as a parameter. The resultant 3706 * state of the header is dependent on its state prior to entering this 3707 * function. The following transitions are possible: 3708 * 3709 * - arc_mru -> arc_mru_ghost 3710 * - arc_mfu -> arc_mfu_ghost 3711 * - arc_mru_ghost -> arc_l2c_only 3712 * - arc_mru_ghost -> deleted 3713 * - arc_mfu_ghost -> arc_l2c_only 3714 * - arc_mfu_ghost -> deleted 3715 * - arc_uncached -> deleted 3716 * 3717 * Return total size of evicted data buffers for eviction progress tracking. 3718 * When evicting from ghost states return logical buffer size to make eviction 3719 * progress at the same (or at least comparable) rate as from non-ghost states. 3720 * 3721 * Return *real_evicted for actual ARC size reduction to wake up threads 3722 * waiting for it. For non-ghost states it includes size of evicted data 3723 * buffers (the headers are not freed there). For ghost states it includes 3724 * only the evicted headers size. 3725 */ 3726 static int64_t 3727 arc_evict_hdr(arc_buf_hdr_t *hdr, uint64_t *real_evicted) 3728 { 3729 arc_state_t *evicted_state, *state; 3730 int64_t bytes_evicted = 0; 3731 uint_t min_lifetime = HDR_PRESCIENT_PREFETCH(hdr) ? 
3732 arc_min_prescient_prefetch_ms : arc_min_prefetch_ms; 3733 3734 ASSERT(MUTEX_HELD(HDR_LOCK(hdr))); 3735 ASSERT(HDR_HAS_L1HDR(hdr)); 3736 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 3737 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 3738 ASSERT0(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt)); 3739 3740 *real_evicted = 0; 3741 state = hdr->b_l1hdr.b_state; 3742 if (GHOST_STATE(state)) { 3743 3744 /* 3745 * l2arc_write_buffers() relies on a header's L1 portion 3746 * (i.e. its b_pabd field) during it's write phase. 3747 * Thus, we cannot push a header onto the arc_l2c_only 3748 * state (removing its L1 piece) until the header is 3749 * done being written to the l2arc. 3750 */ 3751 if (HDR_HAS_L2HDR(hdr) && HDR_L2_WRITING(hdr)) { 3752 ARCSTAT_BUMP(arcstat_evict_l2_skip); 3753 return (bytes_evicted); 3754 } 3755 3756 ARCSTAT_BUMP(arcstat_deleted); 3757 bytes_evicted += HDR_GET_LSIZE(hdr); 3758 3759 DTRACE_PROBE1(arc__delete, arc_buf_hdr_t *, hdr); 3760 3761 if (HDR_HAS_L2HDR(hdr)) { 3762 ASSERT(hdr->b_l1hdr.b_pabd == NULL); 3763 ASSERT(!HDR_HAS_RABD(hdr)); 3764 /* 3765 * This buffer is cached on the 2nd Level ARC; 3766 * don't destroy the header. 3767 */ 3768 arc_change_state(arc_l2c_only, hdr); 3769 /* 3770 * dropping from L1+L2 cached to L2-only, 3771 * realloc to remove the L1 header. 3772 */ 3773 (void) arc_hdr_realloc(hdr, hdr_full_cache, 3774 hdr_l2only_cache); 3775 *real_evicted += HDR_FULL_SIZE - HDR_L2ONLY_SIZE; 3776 } else { 3777 arc_change_state(arc_anon, hdr); 3778 arc_hdr_destroy(hdr); 3779 *real_evicted += HDR_FULL_SIZE; 3780 } 3781 return (bytes_evicted); 3782 } 3783 3784 ASSERT(state == arc_mru || state == arc_mfu || state == arc_uncached); 3785 evicted_state = (state == arc_uncached) ? arc_anon : 3786 ((state == arc_mru) ? arc_mru_ghost : arc_mfu_ghost); 3787 3788 /* prefetch buffers have a minimum lifespan */ 3789 if ((hdr->b_flags & (ARC_FLAG_PREFETCH | ARC_FLAG_INDIRECT)) && 3790 ddi_get_lbolt() - hdr->b_l1hdr.b_arc_access < 3791 MSEC_TO_TICK(min_lifetime)) { 3792 ARCSTAT_BUMP(arcstat_evict_skip); 3793 return (bytes_evicted); 3794 } 3795 3796 if (HDR_HAS_L2HDR(hdr)) { 3797 ARCSTAT_INCR(arcstat_evict_l2_cached, HDR_GET_LSIZE(hdr)); 3798 } else { 3799 if (l2arc_write_eligible(hdr->b_spa, hdr)) { 3800 ARCSTAT_INCR(arcstat_evict_l2_eligible, 3801 HDR_GET_LSIZE(hdr)); 3802 3803 switch (state->arcs_state) { 3804 case ARC_STATE_MRU: 3805 ARCSTAT_INCR( 3806 arcstat_evict_l2_eligible_mru, 3807 HDR_GET_LSIZE(hdr)); 3808 break; 3809 case ARC_STATE_MFU: 3810 ARCSTAT_INCR( 3811 arcstat_evict_l2_eligible_mfu, 3812 HDR_GET_LSIZE(hdr)); 3813 break; 3814 default: 3815 break; 3816 } 3817 } else { 3818 ARCSTAT_INCR(arcstat_evict_l2_ineligible, 3819 HDR_GET_LSIZE(hdr)); 3820 } 3821 } 3822 3823 bytes_evicted += arc_hdr_size(hdr); 3824 *real_evicted += arc_hdr_size(hdr); 3825 3826 /* 3827 * If this hdr is being evicted and has a compressed buffer then we 3828 * discard it here before we change states. This ensures that the 3829 * accounting is updated correctly in arc_free_data_impl(). 
3830 */ 3831 if (hdr->b_l1hdr.b_pabd != NULL) 3832 arc_hdr_free_abd(hdr, B_FALSE); 3833 3834 if (HDR_HAS_RABD(hdr)) 3835 arc_hdr_free_abd(hdr, B_TRUE); 3836 3837 arc_change_state(evicted_state, hdr); 3838 DTRACE_PROBE1(arc__evict, arc_buf_hdr_t *, hdr); 3839 if (evicted_state == arc_anon) { 3840 arc_hdr_destroy(hdr); 3841 *real_evicted += HDR_FULL_SIZE; 3842 } else { 3843 ASSERT(HDR_IN_HASH_TABLE(hdr)); 3844 } 3845 3846 return (bytes_evicted); 3847 } 3848 3849 static void 3850 arc_set_need_free(void) 3851 { 3852 ASSERT(MUTEX_HELD(&arc_evict_lock)); 3853 int64_t remaining = arc_free_memory() - arc_sys_free / 2; 3854 arc_evict_waiter_t *aw = list_tail(&arc_evict_waiters); 3855 if (aw == NULL) { 3856 arc_need_free = MAX(-remaining, 0); 3857 } else { 3858 arc_need_free = 3859 MAX(-remaining, (int64_t)(aw->aew_count - arc_evict_count)); 3860 } 3861 } 3862 3863 static uint64_t 3864 arc_evict_state_impl(multilist_t *ml, int idx, arc_buf_hdr_t *marker, 3865 uint64_t spa, uint64_t bytes) 3866 { 3867 multilist_sublist_t *mls; 3868 uint64_t bytes_evicted = 0, real_evicted = 0; 3869 arc_buf_hdr_t *hdr; 3870 kmutex_t *hash_lock; 3871 uint_t evict_count = zfs_arc_evict_batch_limit; 3872 3873 ASSERT3P(marker, !=, NULL); 3874 3875 mls = multilist_sublist_lock(ml, idx); 3876 3877 for (hdr = multilist_sublist_prev(mls, marker); likely(hdr != NULL); 3878 hdr = multilist_sublist_prev(mls, marker)) { 3879 if ((evict_count == 0) || (bytes_evicted >= bytes)) 3880 break; 3881 3882 /* 3883 * To keep our iteration location, move the marker 3884 * forward. Since we're not holding hdr's hash lock, we 3885 * must be very careful and not remove 'hdr' from the 3886 * sublist. Otherwise, other consumers might mistake the 3887 * 'hdr' as not being on a sublist when they call the 3888 * multilist_link_active() function (they all rely on 3889 * the hash lock protecting concurrent insertions and 3890 * removals). multilist_sublist_move_forward() was 3891 * specifically implemented to ensure this is the case 3892 * (only 'marker' will be removed and re-inserted). 3893 */ 3894 multilist_sublist_move_forward(mls, marker); 3895 3896 /* 3897 * The only case where the b_spa field should ever be 3898 * zero, is the marker headers inserted by 3899 * arc_evict_state(). It's possible for multiple threads 3900 * to be calling arc_evict_state() concurrently (e.g. 3901 * dsl_pool_close() and zio_inject_fault()), so we must 3902 * skip any markers we see from these other threads. 3903 */ 3904 if (hdr->b_spa == 0) 3905 continue; 3906 3907 /* we're only interested in evicting buffers of a certain spa */ 3908 if (spa != 0 && hdr->b_spa != spa) { 3909 ARCSTAT_BUMP(arcstat_evict_skip); 3910 continue; 3911 } 3912 3913 hash_lock = HDR_LOCK(hdr); 3914 3915 /* 3916 * We aren't calling this function from any code path 3917 * that would already be holding a hash lock, so we're 3918 * asserting on this assumption to be defensive in case 3919 * this ever changes. Without this check, it would be 3920 * possible to incorrectly increment arcstat_mutex_miss 3921 * below (e.g. if the code changed such that we called 3922 * this function with a hash lock held). 
3923 */ 3924 ASSERT(!MUTEX_HELD(hash_lock)); 3925 3926 if (mutex_tryenter(hash_lock)) { 3927 uint64_t revicted; 3928 uint64_t evicted = arc_evict_hdr(hdr, &revicted); 3929 mutex_exit(hash_lock); 3930 3931 bytes_evicted += evicted; 3932 real_evicted += revicted; 3933 3934 /* 3935 * If evicted is zero, arc_evict_hdr() must have 3936 * decided to skip this header, don't increment 3937 * evict_count in this case. 3938 */ 3939 if (evicted != 0) 3940 evict_count--; 3941 3942 } else { 3943 ARCSTAT_BUMP(arcstat_mutex_miss); 3944 } 3945 } 3946 3947 multilist_sublist_unlock(mls); 3948 3949 /* 3950 * Increment the count of evicted bytes, and wake up any threads that 3951 * are waiting for the count to reach this value. Since the list is 3952 * ordered by ascending aew_count, we pop off the beginning of the 3953 * list until we reach the end, or a waiter that's past the current 3954 * "count". Doing this outside the loop reduces the number of times 3955 * we need to acquire the global arc_evict_lock. 3956 * 3957 * Only wake when there's sufficient free memory in the system 3958 * (specifically, arc_sys_free/2, which by default is a bit more than 3959 * 1/64th of RAM). See the comments in arc_wait_for_eviction(). 3960 */ 3961 mutex_enter(&arc_evict_lock); 3962 arc_evict_count += real_evicted; 3963 3964 if (arc_free_memory() > arc_sys_free / 2) { 3965 arc_evict_waiter_t *aw; 3966 while ((aw = list_head(&arc_evict_waiters)) != NULL && 3967 aw->aew_count <= arc_evict_count) { 3968 list_remove(&arc_evict_waiters, aw); 3969 cv_broadcast(&aw->aew_cv); 3970 } 3971 } 3972 arc_set_need_free(); 3973 mutex_exit(&arc_evict_lock); 3974 3975 /* 3976 * If the ARC size is reduced from arc_c_max to arc_c_min (especially 3977 * if the average cached block is small), eviction can be on-CPU for 3978 * many seconds. To ensure that other threads that may be bound to 3979 * this CPU are able to make progress, make a voluntary preemption 3980 * call here. 3981 */ 3982 kpreempt(KPREEMPT_SYNC); 3983 3984 return (bytes_evicted); 3985 } 3986 3987 /* 3988 * Allocate an array of buffer headers used as placeholders during arc state 3989 * eviction. 3990 */ 3991 static arc_buf_hdr_t ** 3992 arc_state_alloc_markers(int count) 3993 { 3994 arc_buf_hdr_t **markers; 3995 3996 markers = kmem_zalloc(sizeof (*markers) * count, KM_SLEEP); 3997 for (int i = 0; i < count; i++) { 3998 markers[i] = kmem_cache_alloc(hdr_full_cache, KM_SLEEP); 3999 4000 /* 4001 * A b_spa of 0 is used to indicate that this header is 4002 * a marker. This fact is used in arc_evict_state_impl(). 4003 */ 4004 markers[i]->b_spa = 0; 4005 4006 } 4007 return (markers); 4008 } 4009 4010 static void 4011 arc_state_free_markers(arc_buf_hdr_t **markers, int count) 4012 { 4013 for (int i = 0; i < count; i++) 4014 kmem_cache_free(hdr_full_cache, markers[i]); 4015 kmem_free(markers, sizeof (*markers) * count); 4016 } 4017 4018 /* 4019 * Evict buffers from the given arc state, until we've removed the 4020 * specified number of bytes. Move the removed buffers to the 4021 * appropriate evict state. 4022 * 4023 * This function makes a "best effort". It skips over any buffers 4024 * it can't get a hash_lock on, and so, may not catch all candidates. 4025 * It may also return without evicting as much space as requested. 4026 * 4027 * If bytes is specified using the special value ARC_EVICT_ALL, this 4028 * will evict all available (i.e. unlocked and evictable) buffers from 4029 * the given arc state; which is used by arc_flush(). 
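 *
 * As an editorial aside, the best-effort nature of this scan can be pictured
 * with the stand-alone, user-space sketch below. It is not the ZFS
 * implementation: the toy_* names and the pthread locking are hypothetical
 * stand-ins for the multilist, marker and hash-lock machinery used by
 * arc_evict_state() below.
 */
#if 0	/* illustrative sketch only, not part of the ARC */
#include <pthread.h>
#include <stdint.h>
#include <stddef.h>

typedef struct toy_buf {
	struct toy_buf	*tb_prev;	/* toward the head (most recent) */
	uint64_t	tb_size;	/* bytes held by this buffer */
	pthread_mutex_t	tb_lock;	/* stand-in for the hash lock */
	int		tb_evictable;	/* no external references held */
} toy_buf_t;

/*
 * Walk from the tail toward the head, counting what we could evict until
 * 'target' bytes are reached.  Buffers whose lock cannot be taken, or which
 * are not evictable, are simply skipped -- like the real scan, the result
 * is a best effort and may fall short of 'target'.
 */
static uint64_t
toy_evict(toy_buf_t *tail, uint64_t target)
{
	uint64_t evicted = 0;

	for (toy_buf_t *tb = tail; tb != NULL && evicted < target;
	    tb = tb->tb_prev) {
		if (pthread_mutex_trylock(&tb->tb_lock) != 0)
			continue;		/* lock miss: skip it */
		if (tb->tb_evictable)
			evicted += tb->tb_size;	/* real code unlinks and frees */
		pthread_mutex_unlock(&tb->tb_lock);
	}
	return (evicted);
}
#endif

/*
 * End of sketch; the actual implementation follows.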
4030 */ 4031 static uint64_t 4032 arc_evict_state(arc_state_t *state, arc_buf_contents_t type, uint64_t spa, 4033 uint64_t bytes) 4034 { 4035 uint64_t total_evicted = 0; 4036 multilist_t *ml = &state->arcs_list[type]; 4037 int num_sublists; 4038 arc_buf_hdr_t **markers; 4039 4040 num_sublists = multilist_get_num_sublists(ml); 4041 4042 /* 4043 * If we've tried to evict from each sublist, made some 4044 * progress, but still have not hit the target number of bytes 4045 * to evict, we want to keep trying. The markers allow us to 4046 * pick up where we left off for each individual sublist, rather 4047 * than starting from the tail each time. 4048 */ 4049 if (zthr_iscurthread(arc_evict_zthr)) { 4050 markers = arc_state_evict_markers; 4051 ASSERT3S(num_sublists, <=, arc_state_evict_marker_count); 4052 } else { 4053 markers = arc_state_alloc_markers(num_sublists); 4054 } 4055 for (int i = 0; i < num_sublists; i++) { 4056 multilist_sublist_t *mls; 4057 4058 mls = multilist_sublist_lock(ml, i); 4059 multilist_sublist_insert_tail(mls, markers[i]); 4060 multilist_sublist_unlock(mls); 4061 } 4062 4063 /* 4064 * While we haven't hit our target number of bytes to evict, or 4065 * we're evicting all available buffers. 4066 */ 4067 while (total_evicted < bytes) { 4068 int sublist_idx = multilist_get_random_index(ml); 4069 uint64_t scan_evicted = 0; 4070 4071 /* 4072 * Start eviction using a randomly selected sublist, 4073 * this is to try and evenly balance eviction across all 4074 * sublists. Always starting at the same sublist 4075 * (e.g. index 0) would cause evictions to favor certain 4076 * sublists over others. 4077 */ 4078 for (int i = 0; i < num_sublists; i++) { 4079 uint64_t bytes_remaining; 4080 uint64_t bytes_evicted; 4081 4082 if (total_evicted < bytes) 4083 bytes_remaining = bytes - total_evicted; 4084 else 4085 break; 4086 4087 bytes_evicted = arc_evict_state_impl(ml, sublist_idx, 4088 markers[sublist_idx], spa, bytes_remaining); 4089 4090 scan_evicted += bytes_evicted; 4091 total_evicted += bytes_evicted; 4092 4093 /* we've reached the end, wrap to the beginning */ 4094 if (++sublist_idx >= num_sublists) 4095 sublist_idx = 0; 4096 } 4097 4098 /* 4099 * If we didn't evict anything during this scan, we have 4100 * no reason to believe we'll evict more during another 4101 * scan, so break the loop. 4102 */ 4103 if (scan_evicted == 0) { 4104 /* This isn't possible, let's make that obvious */ 4105 ASSERT3S(bytes, !=, 0); 4106 4107 /* 4108 * When bytes is ARC_EVICT_ALL, the only way to 4109 * break the loop is when scan_evicted is zero. 4110 * In that case, we actually have evicted enough, 4111 * so we don't want to increment the kstat. 4112 */ 4113 if (bytes != ARC_EVICT_ALL) { 4114 ASSERT3S(total_evicted, <, bytes); 4115 ARCSTAT_BUMP(arcstat_evict_not_enough); 4116 } 4117 4118 break; 4119 } 4120 } 4121 4122 for (int i = 0; i < num_sublists; i++) { 4123 multilist_sublist_t *mls = multilist_sublist_lock(ml, i); 4124 multilist_sublist_remove(mls, markers[i]); 4125 multilist_sublist_unlock(mls); 4126 } 4127 if (markers != arc_state_evict_markers) 4128 arc_state_free_markers(markers, num_sublists); 4129 4130 return (total_evicted); 4131 } 4132 4133 /* 4134 * Flush all "evictable" data of the given type from the arc state 4135 * specified. This will not evict any "active" buffers (i.e. referenced). 4136 * 4137 * When 'retry' is set to B_FALSE, the function will make a single pass 4138 * over the state and evict any buffers that it can. 
Since it doesn't 4139 * continually retry the eviction, it might end up leaving some buffers 4140 * in the ARC due to lock misses. 4141 * 4142 * When 'retry' is set to B_TRUE, the function will continually retry the 4143 * eviction until *all* evictable buffers have been removed from the 4144 * state. As a result, if concurrent insertions into the state are 4145 * allowed (e.g. if the ARC isn't shutting down), this function might 4146 * wind up in an infinite loop, continually trying to evict buffers. 4147 */ 4148 static uint64_t 4149 arc_flush_state(arc_state_t *state, uint64_t spa, arc_buf_contents_t type, 4150 boolean_t retry) 4151 { 4152 uint64_t evicted = 0; 4153 4154 while (zfs_refcount_count(&state->arcs_esize[type]) != 0) { 4155 evicted += arc_evict_state(state, type, spa, ARC_EVICT_ALL); 4156 4157 if (!retry) 4158 break; 4159 } 4160 4161 return (evicted); 4162 } 4163 4164 /* 4165 * Evict the specified number of bytes from the state specified. This 4166 * function prevents us from trying to evict more from a state's list 4167 * than is "evictable", and to skip evicting altogether when passed a 4168 * negative value for "bytes". In contrast, arc_evict_state() will 4169 * evict everything it can, when passed a negative value for "bytes". 4170 */ 4171 static uint64_t 4172 arc_evict_impl(arc_state_t *state, arc_buf_contents_t type, int64_t bytes) 4173 { 4174 uint64_t delta; 4175 4176 if (bytes > 0 && zfs_refcount_count(&state->arcs_esize[type]) > 0) { 4177 delta = MIN(zfs_refcount_count(&state->arcs_esize[type]), 4178 bytes); 4179 return (arc_evict_state(state, type, 0, delta)); 4180 } 4181 4182 return (0); 4183 } 4184 4185 /* 4186 * Adjust specified fraction, taking into account initial ghost state(s) size, 4187 * ghost hit bytes towards increasing the fraction, ghost hit bytes towards 4188 * decreasing it, plus a balance factor, controlling the decrease rate, used 4189 * to balance metadata vs data. 4190 */ 4191 static uint64_t 4192 arc_evict_adj(uint64_t frac, uint64_t total, uint64_t up, uint64_t down, 4193 uint_t balance) 4194 { 4195 if (total < 8 || up + down == 0) 4196 return (frac); 4197 4198 /* 4199 * We should not have more ghost hits than ghost size, but they 4200 * may get close. Restrict maximum adjustment in that case. 4201 */ 4202 if (up + down >= total / 4) { 4203 uint64_t scale = (up + down) / (total / 8); 4204 up /= scale; 4205 down /= scale; 4206 } 4207 4208 /* Get maximal dynamic range by choosing optimal shifts. */ 4209 int s = highbit64(total); 4210 s = MIN(64 - s, 32); 4211 4212 uint64_t ofrac = (1ULL << 32) - frac; 4213 4214 if (frac >= 4 * ofrac) 4215 up /= frac / (2 * ofrac + 1); 4216 up = (up << s) / (total >> (32 - s)); 4217 if (ofrac >= 4 * frac) 4218 down /= ofrac / (2 * frac + 1); 4219 down = (down << s) / (total >> (32 - s)); 4220 down = down * 100 / balance; 4221 4222 return (frac + up - down); 4223 } 4224 4225 /* 4226 * Evict buffers from the cache, such that arcstat_size is capped by arc_c. 4227 */ 4228 static uint64_t 4229 arc_evict(void) 4230 { 4231 uint64_t asize, bytes, total_evicted = 0; 4232 int64_t e, mrud, mrum, mfud, mfum, w; 4233 static uint64_t ogrd, ogrm, ogfd, ogfm; 4234 static uint64_t gsrd, gsrm, gsfd, gsfm; 4235 uint64_t ngrd, ngrm, ngfd, ngfm; 4236 4237 /* Get current size of ARC states we can evict from. 
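 * Here mrud/mrum are the current MRU data/metadata sizes (anonymous
 * buffers are counted together with MRU), and mfud/mfum are the MFU
 * data/metadata sizes.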
*/ 4238 mrud = zfs_refcount_count(&arc_mru->arcs_size[ARC_BUFC_DATA]) + 4239 zfs_refcount_count(&arc_anon->arcs_size[ARC_BUFC_DATA]); 4240 mrum = zfs_refcount_count(&arc_mru->arcs_size[ARC_BUFC_METADATA]) + 4241 zfs_refcount_count(&arc_anon->arcs_size[ARC_BUFC_METADATA]); 4242 mfud = zfs_refcount_count(&arc_mfu->arcs_size[ARC_BUFC_DATA]); 4243 mfum = zfs_refcount_count(&arc_mfu->arcs_size[ARC_BUFC_METADATA]); 4244 uint64_t d = mrud + mfud; 4245 uint64_t m = mrum + mfum; 4246 uint64_t t = d + m; 4247 4248 /* Get ARC ghost hits since last eviction. */ 4249 ngrd = wmsum_value(&arc_mru_ghost->arcs_hits[ARC_BUFC_DATA]); 4250 uint64_t grd = ngrd - ogrd; 4251 ogrd = ngrd; 4252 ngrm = wmsum_value(&arc_mru_ghost->arcs_hits[ARC_BUFC_METADATA]); 4253 uint64_t grm = ngrm - ogrm; 4254 ogrm = ngrm; 4255 ngfd = wmsum_value(&arc_mfu_ghost->arcs_hits[ARC_BUFC_DATA]); 4256 uint64_t gfd = ngfd - ogfd; 4257 ogfd = ngfd; 4258 ngfm = wmsum_value(&arc_mfu_ghost->arcs_hits[ARC_BUFC_METADATA]); 4259 uint64_t gfm = ngfm - ogfm; 4260 ogfm = ngfm; 4261 4262 /* Adjust ARC states balance based on ghost hits. */ 4263 arc_meta = arc_evict_adj(arc_meta, gsrd + gsrm + gsfd + gsfm, 4264 grm + gfm, grd + gfd, zfs_arc_meta_balance); 4265 arc_pd = arc_evict_adj(arc_pd, gsrd + gsfd, grd, gfd, 100); 4266 arc_pm = arc_evict_adj(arc_pm, gsrm + gsfm, grm, gfm, 100); 4267 4268 asize = aggsum_value(&arc_sums.arcstat_size); 4269 int64_t wt = t - (asize - arc_c); 4270 4271 /* 4272 * Try to reduce pinned dnodes if more than 3/4 of wanted metadata 4273 * target is not evictable or if they go over arc_dnode_limit. 4274 */ 4275 int64_t prune = 0; 4276 int64_t dn = wmsum_value(&arc_sums.arcstat_dnode_size); 4277 w = wt * (int64_t)(arc_meta >> 16) >> 16; 4278 if (zfs_refcount_count(&arc_mru->arcs_size[ARC_BUFC_METADATA]) + 4279 zfs_refcount_count(&arc_mfu->arcs_size[ARC_BUFC_METADATA]) - 4280 zfs_refcount_count(&arc_mru->arcs_esize[ARC_BUFC_METADATA]) - 4281 zfs_refcount_count(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]) > 4282 w * 3 / 4) { 4283 prune = dn / sizeof (dnode_t) * 4284 zfs_arc_dnode_reduce_percent / 100; 4285 } else if (dn > arc_dnode_limit) { 4286 prune = (dn - arc_dnode_limit) / sizeof (dnode_t) * 4287 zfs_arc_dnode_reduce_percent / 100; 4288 } 4289 if (prune > 0) 4290 arc_prune_async(prune); 4291 4292 /* Evict MRU metadata. */ 4293 w = wt * (int64_t)(arc_meta * arc_pm >> 48) >> 16; 4294 e = MIN((int64_t)(asize - arc_c), (int64_t)(mrum - w)); 4295 bytes = arc_evict_impl(arc_mru, ARC_BUFC_METADATA, e); 4296 total_evicted += bytes; 4297 mrum -= bytes; 4298 asize -= bytes; 4299 4300 /* Evict MFU metadata. */ 4301 w = wt * (int64_t)(arc_meta >> 16) >> 16; 4302 e = MIN((int64_t)(asize - arc_c), (int64_t)(m - w)); 4303 bytes = arc_evict_impl(arc_mfu, ARC_BUFC_METADATA, e); 4304 total_evicted += bytes; 4305 mfum -= bytes; 4306 asize -= bytes; 4307 4308 /* Evict MRU data. */ 4309 wt -= m - total_evicted; 4310 w = wt * (int64_t)(arc_pd >> 16) >> 16; 4311 e = MIN((int64_t)(asize - arc_c), (int64_t)(mrud - w)); 4312 bytes = arc_evict_impl(arc_mru, ARC_BUFC_DATA, e); 4313 total_evicted += bytes; 4314 mrud -= bytes; 4315 asize -= bytes; 4316 4317 /* Evict MFU data. */ 4318 e = asize - arc_c; 4319 bytes = arc_evict_impl(arc_mfu, ARC_BUFC_DATA, e); 4320 mfud -= bytes; 4321 total_evicted += bytes; 4322 4323 /* 4324 * Evict ghost lists 4325 * 4326 * Size of each state's ghost list represents how much that state 4327 * may grow by shrinking the other states. 
If it needed to shrink the
4328 * other states to zero (which is unlikely), its ghost size would be
4329 * equal to the sum of the other three state sizes. But an excessive ghost
4330 * size may produce false ghost hits (too far back) that may
4331 * never become real cache hits if several states are competing.
4332 * So choose an arbitrary point of 1/2 of the other state sizes.
4333 */
4334 gsrd = (mrum + mfud + mfum) / 2;
4335 e = zfs_refcount_count(&arc_mru_ghost->arcs_size[ARC_BUFC_DATA]) -
4336 gsrd;
4337 (void) arc_evict_impl(arc_mru_ghost, ARC_BUFC_DATA, e);
4338
4339 gsrm = (mrud + mfud + mfum) / 2;
4340 e = zfs_refcount_count(&arc_mru_ghost->arcs_size[ARC_BUFC_METADATA]) -
4341 gsrm;
4342 (void) arc_evict_impl(arc_mru_ghost, ARC_BUFC_METADATA, e);
4343
4344 gsfd = (mrud + mrum + mfum) / 2;
4345 e = zfs_refcount_count(&arc_mfu_ghost->arcs_size[ARC_BUFC_DATA]) -
4346 gsfd;
4347 (void) arc_evict_impl(arc_mfu_ghost, ARC_BUFC_DATA, e);
4348
4349 gsfm = (mrud + mrum + mfud) / 2;
4350 e = zfs_refcount_count(&arc_mfu_ghost->arcs_size[ARC_BUFC_METADATA]) -
4351 gsfm;
4352 (void) arc_evict_impl(arc_mfu_ghost, ARC_BUFC_METADATA, e);
4353
4354 return (total_evicted);
4355 }
4356
4357 void
4358 arc_flush(spa_t *spa, boolean_t retry)
4359 {
4360 uint64_t guid = 0;
4361
4362 /*
4363 * If retry is B_TRUE, a spa must not be specified since we have
4364 * no good way to determine if all of a spa's buffers have been
4365 * evicted from an arc state.
4366 */
4367 ASSERT(!retry || spa == NULL);
4368
4369 if (spa != NULL)
4370 guid = spa_load_guid(spa);
4371
4372 (void) arc_flush_state(arc_mru, guid, ARC_BUFC_DATA, retry);
4373 (void) arc_flush_state(arc_mru, guid, ARC_BUFC_METADATA, retry);
4374
4375 (void) arc_flush_state(arc_mfu, guid, ARC_BUFC_DATA, retry);
4376 (void) arc_flush_state(arc_mfu, guid, ARC_BUFC_METADATA, retry);
4377
4378 (void) arc_flush_state(arc_mru_ghost, guid, ARC_BUFC_DATA, retry);
4379 (void) arc_flush_state(arc_mru_ghost, guid, ARC_BUFC_METADATA, retry);
4380
4381 (void) arc_flush_state(arc_mfu_ghost, guid, ARC_BUFC_DATA, retry);
4382 (void) arc_flush_state(arc_mfu_ghost, guid, ARC_BUFC_METADATA, retry);
4383
4384 (void) arc_flush_state(arc_uncached, guid, ARC_BUFC_DATA, retry);
4385 (void) arc_flush_state(arc_uncached, guid, ARC_BUFC_METADATA, retry);
4386 }
4387
4388 void
4389 arc_reduce_target_size(int64_t to_free)
4390 {
4391 uint64_t c = arc_c;
4392
4393 if (c <= arc_c_min)
4394 return;
4395
4396 /*
4397 * All callers want the ARC to actually evict (at least) this much
4398 * memory. Therefore we reduce from the lower of the current size and
4399 * the target size. This way, even if arc_c is much higher than
4400 * arc_size (as can be the case after many calls to arc_freed()), we will
4401 * immediately have arc_c < arc_size and therefore the arc_evict_zthr
4402 * will evict.
4403 */
4404 uint64_t asize = aggsum_value(&arc_sums.arcstat_size);
4405 if (asize < c)
4406 to_free += c - asize;
4407 arc_c = MAX((int64_t)c - to_free, (int64_t)arc_c_min);
4408
4409 /* See comment in arc_evict_cb_check() on why lock+flag */
4410 mutex_enter(&arc_evict_lock);
4411 arc_evict_needed = B_TRUE;
4412 mutex_exit(&arc_evict_lock);
4413 zthr_wakeup(arc_evict_zthr);
4414 }
4415
4416 /*
4417 * Determine if the system is under memory pressure and is asking
4418 * to reclaim memory. A return value of B_TRUE indicates that the system
4419 * is under memory pressure and that the arc should adjust accordingly.
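 *
 * When this predicate fires, the reap thread eventually shrinks the target
 * via arc_reduce_target_size() above. As an editorial illustration, the
 * stand-alone sketch below shows that clamping arithmetic in isolation; the
 * toy_* names are hypothetical and only the arithmetic mirrors the function
 * above.
 */
#if 0	/* illustrative sketch only, not part of the ARC */
#include <stdint.h>

static uint64_t
toy_reduce_target(uint64_t target, uint64_t cur_size, uint64_t floor,
    int64_t to_free)
{
	/*
	 * Reduce from the lower of the current size and the target, so the
	 * new target lands below the current size and eviction has work to
	 * do.  Never drop below the configured floor.
	 */
	if (cur_size < target)
		to_free += (int64_t)(target - cur_size);
	int64_t nt = (int64_t)target - to_free;
	return (nt > (int64_t)floor ? (uint64_t)nt : floor);
}
#endif

/*
 * End of sketch.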
4420 */ 4421 boolean_t 4422 arc_reclaim_needed(void) 4423 { 4424 return (arc_available_memory() < 0); 4425 } 4426 4427 void 4428 arc_kmem_reap_soon(void) 4429 { 4430 size_t i; 4431 kmem_cache_t *prev_cache = NULL; 4432 kmem_cache_t *prev_data_cache = NULL; 4433 4434 #ifdef _KERNEL 4435 #if defined(_ILP32) 4436 /* 4437 * Reclaim unused memory from all kmem caches. 4438 */ 4439 kmem_reap(); 4440 #endif 4441 #endif 4442 4443 for (i = 0; i < SPA_MAXBLOCKSIZE >> SPA_MINBLOCKSHIFT; i++) { 4444 #if defined(_ILP32) 4445 /* reach upper limit of cache size on 32-bit */ 4446 if (zio_buf_cache[i] == NULL) 4447 break; 4448 #endif 4449 if (zio_buf_cache[i] != prev_cache) { 4450 prev_cache = zio_buf_cache[i]; 4451 kmem_cache_reap_now(zio_buf_cache[i]); 4452 } 4453 if (zio_data_buf_cache[i] != prev_data_cache) { 4454 prev_data_cache = zio_data_buf_cache[i]; 4455 kmem_cache_reap_now(zio_data_buf_cache[i]); 4456 } 4457 } 4458 kmem_cache_reap_now(buf_cache); 4459 kmem_cache_reap_now(hdr_full_cache); 4460 kmem_cache_reap_now(hdr_l2only_cache); 4461 kmem_cache_reap_now(zfs_btree_leaf_cache); 4462 abd_cache_reap_now(); 4463 } 4464 4465 static boolean_t 4466 arc_evict_cb_check(void *arg, zthr_t *zthr) 4467 { 4468 (void) arg, (void) zthr; 4469 4470 #ifdef ZFS_DEBUG 4471 /* 4472 * This is necessary in order to keep the kstat information 4473 * up to date for tools that display kstat data such as the 4474 * mdb ::arc dcmd and the Linux crash utility. These tools 4475 * typically do not call kstat's update function, but simply 4476 * dump out stats from the most recent update. Without 4477 * this call, these commands may show stale stats for the 4478 * anon, mru, mru_ghost, mfu, and mfu_ghost lists. Even 4479 * with this call, the data might be out of date if the 4480 * evict thread hasn't been woken recently; but that should 4481 * suffice. The arc_state_t structures can be queried 4482 * directly if more accurate information is needed. 4483 */ 4484 if (arc_ksp != NULL) 4485 arc_ksp->ks_update(arc_ksp, KSTAT_READ); 4486 #endif 4487 4488 /* 4489 * We have to rely on arc_wait_for_eviction() to tell us when to 4490 * evict, rather than checking if we are overflowing here, so that we 4491 * are sure to not leave arc_wait_for_eviction() waiting on aew_cv. 4492 * If we have become "not overflowing" since arc_wait_for_eviction() 4493 * checked, we need to wake it up. We could broadcast the CV here, 4494 * but arc_wait_for_eviction() may have not yet gone to sleep. We 4495 * would need to use a mutex to ensure that this function doesn't 4496 * broadcast until arc_wait_for_eviction() has gone to sleep (e.g. 4497 * the arc_evict_lock). However, the lock ordering of such a lock 4498 * would necessarily be incorrect with respect to the zthr_lock, 4499 * which is held before this function is called, and is held by 4500 * arc_wait_for_eviction() when it calls zthr_wakeup(). 4501 */ 4502 if (arc_evict_needed) 4503 return (B_TRUE); 4504 4505 /* 4506 * If we have buffers in uncached state, evict them periodically. 4507 */ 4508 return ((zfs_refcount_count(&arc_uncached->arcs_esize[ARC_BUFC_DATA]) + 4509 zfs_refcount_count(&arc_uncached->arcs_esize[ARC_BUFC_METADATA]) && 4510 ddi_get_lbolt() - arc_last_uncached_flush > 4511 MSEC_TO_TICK(arc_min_prefetch_ms / 2))); 4512 } 4513 4514 /* 4515 * Keep arc_size under arc_c by running arc_evict which evicts data 4516 * from the ARC. 
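 *
 * The shape of that control loop -- keep evicting until we are back under
 * target, but stop as soon as a pass makes no progress -- can be pictured
 * with the stand-alone sketch below.  The toy_* names and numbers are
 * hypothetical; only the stop-on-no-progress structure mirrors
 * arc_evict_cb() below.
 */
#if 0	/* illustrative sketch only, not part of the ARC */
#include <stdint.h>

static uint64_t toy_size = 96;		/* current cache size */
static uint64_t toy_target = 64;	/* desired cache size */
static uint64_t toy_pinned = 80;	/* bytes that cannot be evicted */

/* Evict whatever is not pinned, at most 8 units per pass. */
static uint64_t
toy_evict_pass(void)
{
	uint64_t evictable = toy_size > toy_pinned ? toy_size - toy_pinned : 0;
	uint64_t evicted = evictable < 8 ? evictable : 8;

	toy_size -= evicted;
	return (evicted);
}

static void
toy_evict_thread_body(void)
{
	/*
	 * With the numbers above we evict 8 + 8 bytes and then stop at
	 * toy_size == 80: everything left is pinned, so another pass would
	 * accomplish nothing even though we are still above the target.
	 */
	while (toy_size > toy_target) {
		if (toy_evict_pass() == 0)
			break;
	}
}
#endif

/*
 * End of sketch.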
4517 */ 4518 static void 4519 arc_evict_cb(void *arg, zthr_t *zthr) 4520 { 4521 (void) arg, (void) zthr; 4522 4523 uint64_t evicted = 0; 4524 fstrans_cookie_t cookie = spl_fstrans_mark(); 4525 4526 /* Always try to evict from uncached state. */ 4527 arc_last_uncached_flush = ddi_get_lbolt(); 4528 evicted += arc_flush_state(arc_uncached, 0, ARC_BUFC_DATA, B_FALSE); 4529 evicted += arc_flush_state(arc_uncached, 0, ARC_BUFC_METADATA, B_FALSE); 4530 4531 /* Evict from other states only if told to. */ 4532 if (arc_evict_needed) 4533 evicted += arc_evict(); 4534 4535 /* 4536 * If evicted is zero, we couldn't evict anything 4537 * via arc_evict(). This could be due to hash lock 4538 * collisions, but more likely due to the majority of 4539 * arc buffers being unevictable. Therefore, even if 4540 * arc_size is above arc_c, another pass is unlikely to 4541 * be helpful and could potentially cause us to enter an 4542 * infinite loop. Additionally, zthr_iscancelled() is 4543 * checked here so that if the arc is shutting down, the 4544 * broadcast will wake any remaining arc evict waiters. 4545 */ 4546 mutex_enter(&arc_evict_lock); 4547 arc_evict_needed = !zthr_iscancelled(arc_evict_zthr) && 4548 evicted > 0 && aggsum_compare(&arc_sums.arcstat_size, arc_c) > 0; 4549 if (!arc_evict_needed) { 4550 /* 4551 * We're either no longer overflowing, or we 4552 * can't evict anything more, so we should wake 4553 * arc_get_data_impl() sooner. 4554 */ 4555 arc_evict_waiter_t *aw; 4556 while ((aw = list_remove_head(&arc_evict_waiters)) != NULL) { 4557 cv_broadcast(&aw->aew_cv); 4558 } 4559 arc_set_need_free(); 4560 } 4561 mutex_exit(&arc_evict_lock); 4562 spl_fstrans_unmark(cookie); 4563 } 4564 4565 static boolean_t 4566 arc_reap_cb_check(void *arg, zthr_t *zthr) 4567 { 4568 (void) arg, (void) zthr; 4569 4570 int64_t free_memory = arc_available_memory(); 4571 static int reap_cb_check_counter = 0; 4572 4573 /* 4574 * If a kmem reap is already active, don't schedule more. We must 4575 * check for this because kmem_cache_reap_soon() won't actually 4576 * block on the cache being reaped (this is to prevent callers from 4577 * becoming implicitly blocked by a system-wide kmem reap -- which, 4578 * on a system with many, many full magazines, can take minutes). 4579 */ 4580 if (!kmem_cache_reap_active() && free_memory < 0) { 4581 4582 arc_no_grow = B_TRUE; 4583 arc_warm = B_TRUE; 4584 /* 4585 * Wait at least zfs_grow_retry (default 5) seconds 4586 * before considering growing. 4587 */ 4588 arc_growtime = gethrtime() + SEC2NSEC(arc_grow_retry); 4589 return (B_TRUE); 4590 } else if (free_memory < arc_c >> arc_no_grow_shift) { 4591 arc_no_grow = B_TRUE; 4592 } else if (gethrtime() >= arc_growtime) { 4593 arc_no_grow = B_FALSE; 4594 } 4595 4596 /* 4597 * Called unconditionally every 60 seconds to reclaim unused 4598 * zstd compression and decompression context. This is done 4599 * here to avoid the need for an independent thread. 4600 */ 4601 if (!((reap_cb_check_counter++) % 60)) 4602 zfs_zstd_cache_reap_now(); 4603 4604 return (B_FALSE); 4605 } 4606 4607 /* 4608 * Keep enough free memory in the system by reaping the ARC's kmem 4609 * caches. To cause more slabs to be reapable, we may reduce the 4610 * target size of the cache (arc_c), causing the arc_evict_cb() 4611 * to free more buffers. 
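 *
 * The amount by which the target is reduced is a fixed fraction of the
 * shrinkable range minus whatever memory is already free; a free-memory
 * deficit therefore increases the reduction.  A stand-alone sketch of that
 * arithmetic follows; the toy_* names are hypothetical and only the shape of
 * the computation mirrors arc_reap_cb() below.
 */
#if 0	/* illustrative sketch only, not part of the ARC */
#include <stdint.h>

static int64_t
toy_reap_shrink_amount(int64_t target, int64_t floor, int64_t free_memory,
    int shift)
{
	int64_t can_shrink = target - floor;

	if (can_shrink <= 0)
		return (0);
	/* e.g. shift == 7 asks for 1/128th of the range, plus any deficit */
	int64_t want = (can_shrink >> shift) - free_memory;
	return (want > 0 ? want : 0);
}
#endif

/*
 * End of sketch.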
4612 */ 4613 static void 4614 arc_reap_cb(void *arg, zthr_t *zthr) 4615 { 4616 (void) arg, (void) zthr; 4617 4618 int64_t free_memory; 4619 fstrans_cookie_t cookie = spl_fstrans_mark(); 4620 4621 /* 4622 * Kick off asynchronous kmem_reap()'s of all our caches. 4623 */ 4624 arc_kmem_reap_soon(); 4625 4626 /* 4627 * Wait at least arc_kmem_cache_reap_retry_ms between 4628 * arc_kmem_reap_soon() calls. Without this check it is possible to 4629 * end up in a situation where we spend lots of time reaping 4630 * caches, while we're near arc_c_min. Waiting here also gives the 4631 * subsequent free memory check a chance of finding that the 4632 * asynchronous reap has already freed enough memory, and we don't 4633 * need to call arc_reduce_target_size(). 4634 */ 4635 delay((hz * arc_kmem_cache_reap_retry_ms + 999) / 1000); 4636 4637 /* 4638 * Reduce the target size as needed to maintain the amount of free 4639 * memory in the system at a fraction of the arc_size (1/128th by 4640 * default). If oversubscribed (free_memory < 0) then reduce the 4641 * target arc_size by the deficit amount plus the fractional 4642 * amount. If free memory is positive but less than the fractional 4643 * amount, reduce by what is needed to hit the fractional amount. 4644 */ 4645 free_memory = arc_available_memory(); 4646 4647 int64_t can_free = arc_c - arc_c_min; 4648 if (can_free > 0) { 4649 int64_t to_free = (can_free >> arc_shrink_shift) - free_memory; 4650 if (to_free > 0) 4651 arc_reduce_target_size(to_free); 4652 } 4653 spl_fstrans_unmark(cookie); 4654 } 4655 4656 #ifdef _KERNEL 4657 /* 4658 * Determine the amount of memory eligible for eviction contained in the 4659 * ARC. All clean data reported by the ghost lists can always be safely 4660 * evicted. Due to arc_c_min, the same does not hold for all clean data 4661 * contained by the regular mru and mfu lists. 4662 * 4663 * In the case of the regular mru and mfu lists, we need to report as 4664 * much clean data as possible, such that evicting that same reported 4665 * data will not bring arc_size below arc_c_min. Thus, in certain 4666 * circumstances, the total amount of clean data in the mru and mfu 4667 * lists might not actually be evictable. 4668 * 4669 * The following two distinct cases are accounted for: 4670 * 4671 * 1. The sum of the amount of dirty data contained by both the mru and 4672 * mfu lists, plus the ARC's other accounting (e.g. the anon list), 4673 * is greater than or equal to arc_c_min. 4674 * (i.e. amount of dirty data >= arc_c_min) 4675 * 4676 * This is the easy case; all clean data contained by the mru and mfu 4677 * lists is evictable. Evicting all clean data can only drop arc_size 4678 * to the amount of dirty data, which is greater than arc_c_min. 4679 * 4680 * 2. The sum of the amount of dirty data contained by both the mru and 4681 * mfu lists, plus the ARC's other accounting (e.g. the anon list), 4682 * is less than arc_c_min. 4683 * (i.e. arc_c_min > amount of dirty data) 4684 * 4685 * 2.1. arc_size is greater than or equal arc_c_min. 4686 * (i.e. arc_size >= arc_c_min > amount of dirty data) 4687 * 4688 * In this case, not all clean data from the regular mru and mfu 4689 * lists is actually evictable; we must leave enough clean data 4690 * to keep arc_size above arc_c_min. Thus, the maximum amount of 4691 * evictable data from the two lists combined, is exactly the 4692 * difference between arc_size and arc_c_min. 4693 * 4694 * 2.2. arc_size is less than arc_c_min 4695 * (i.e. 
arc_c_min > arc_size > amount of dirty data) 4696 * 4697 * In this case, none of the data contained in the mru and mfu 4698 * lists is evictable, even if it's clean. Since arc_size is 4699 * already below arc_c_min, evicting any more would only 4700 * increase this negative difference. 4701 */ 4702 4703 #endif /* _KERNEL */ 4704 4705 /* 4706 * Adapt arc info given the number of bytes we are trying to add and 4707 * the state that we are coming from. This function is only called 4708 * when we are adding new content to the cache. 4709 */ 4710 static void 4711 arc_adapt(uint64_t bytes) 4712 { 4713 /* 4714 * Wake reap thread if we do not have any available memory 4715 */ 4716 if (arc_reclaim_needed()) { 4717 zthr_wakeup(arc_reap_zthr); 4718 return; 4719 } 4720 4721 if (arc_no_grow) 4722 return; 4723 4724 if (arc_c >= arc_c_max) 4725 return; 4726 4727 /* 4728 * If we're within (2 * maxblocksize) bytes of the target 4729 * cache size, increment the target cache size 4730 */ 4731 if (aggsum_upper_bound(&arc_sums.arcstat_size) + 4732 2 * SPA_MAXBLOCKSIZE >= arc_c) { 4733 uint64_t dc = MAX(bytes, SPA_OLD_MAXBLOCKSIZE); 4734 if (atomic_add_64_nv(&arc_c, dc) > arc_c_max) 4735 arc_c = arc_c_max; 4736 } 4737 } 4738 4739 /* 4740 * Check if arc_size has grown past our upper threshold, determined by 4741 * zfs_arc_overflow_shift. 4742 */ 4743 static arc_ovf_level_t 4744 arc_is_overflowing(boolean_t use_reserve) 4745 { 4746 /* Always allow at least one block of overflow */ 4747 int64_t overflow = MAX(SPA_MAXBLOCKSIZE, 4748 arc_c >> zfs_arc_overflow_shift); 4749 4750 /* 4751 * We just compare the lower bound here for performance reasons. Our 4752 * primary goals are to make sure that the arc never grows without 4753 * bound, and that it can reach its maximum size. This check 4754 * accomplishes both goals. The maximum amount we could run over by is 4755 * 2 * aggsum_borrow_multiplier * NUM_CPUS * the average size of a block 4756 * in the ARC. In practice, that's in the tens of MB, which is low 4757 * enough to be safe. 4758 */ 4759 int64_t over = aggsum_lower_bound(&arc_sums.arcstat_size) - 4760 arc_c - overflow / 2; 4761 if (!use_reserve) 4762 overflow /= 2; 4763 return (over < 0 ? ARC_OVF_NONE : 4764 over < overflow ? ARC_OVF_SOME : ARC_OVF_SEVERE); 4765 } 4766 4767 static abd_t * 4768 arc_get_data_abd(arc_buf_hdr_t *hdr, uint64_t size, const void *tag, 4769 int alloc_flags) 4770 { 4771 arc_buf_contents_t type = arc_buf_type(hdr); 4772 4773 arc_get_data_impl(hdr, size, tag, alloc_flags); 4774 if (alloc_flags & ARC_HDR_ALLOC_LINEAR) 4775 return (abd_alloc_linear(size, type == ARC_BUFC_METADATA)); 4776 else 4777 return (abd_alloc(size, type == ARC_BUFC_METADATA)); 4778 } 4779 4780 static void * 4781 arc_get_data_buf(arc_buf_hdr_t *hdr, uint64_t size, const void *tag) 4782 { 4783 arc_buf_contents_t type = arc_buf_type(hdr); 4784 4785 arc_get_data_impl(hdr, size, tag, 0); 4786 if (type == ARC_BUFC_METADATA) { 4787 return (zio_buf_alloc(size)); 4788 } else { 4789 ASSERT(type == ARC_BUFC_DATA); 4790 return (zio_data_buf_alloc(size)); 4791 } 4792 } 4793 4794 /* 4795 * Wait for the specified amount of data (in bytes) to be evicted from the 4796 * ARC, and for there to be sufficient free memory in the system. Waiting for 4797 * eviction ensures that the memory used by the ARC decreases. Waiting for 4798 * free memory ensures that the system won't run out of free pages, regardless 4799 * of ARC behavior and settings. See arc_lowmem_init(). 
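 *
 * The waiting itself is a ticket scheme: eviction maintains a monotonically
 * increasing count of evicted bytes, and each waiter computes a ticket one
 * 'amount' past the larger of the current count and the last ticket handed
 * out, then sleeps until the count passes it.  The stand-alone sketch below
 * shows that scheme with a single condition variable; the toy_* names are
 * hypothetical, and the real code below queues one CV per waiter so wakeups
 * happen in FIFO order.
 */
#if 0	/* illustrative sketch only, not part of the ARC */
#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t	toy_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t	toy_cv = PTHREAD_COND_INITIALIZER;
static uint64_t		toy_evict_count;	/* bytes evicted so far */
static uint64_t		toy_last_ticket;	/* highest ticket handed out */

static void
toy_wait_for_eviction(uint64_t amount)
{
	pthread_mutex_lock(&toy_lock);
	uint64_t base = toy_last_ticket > toy_evict_count ?
	    toy_last_ticket : toy_evict_count;
	uint64_t ticket = base + amount;

	toy_last_ticket = ticket;
	while (toy_evict_count < ticket)
		pthread_cond_wait(&toy_cv, &toy_lock);
	pthread_mutex_unlock(&toy_lock);
}

/* Called by the (hypothetical) eviction thread after each pass. */
static void
toy_note_evicted(uint64_t bytes)
{
	pthread_mutex_lock(&toy_lock);
	toy_evict_count += bytes;
	pthread_cond_broadcast(&toy_cv);
	pthread_mutex_unlock(&toy_lock);
}
#endif

/*
 * End of sketch.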
4800 */
4801 void
4802 arc_wait_for_eviction(uint64_t amount, boolean_t use_reserve)
4803 {
4804 switch (arc_is_overflowing(use_reserve)) {
4805 case ARC_OVF_NONE:
4806 return;
4807 case ARC_OVF_SOME:
4808 /*
4809 * This is a bit racy without taking arc_evict_lock, but the
4810 * worst that can happen is that we either call zthr_wakeup() an
4811 * extra time due to a race with another thread here, or the flag
4812 * we set gets cleared by arc_evict_cb(), which is unlikely given
4813 * the large hysteresis, and also unimportant since at this level
4814 * of overflow the eviction is purely advisory. At the same time,
4815 * taking the global lock here on every call without waiting for
4816 * the actual eviction would create significant lock contention.
4817 */
4818 if (!arc_evict_needed) {
4819 arc_evict_needed = B_TRUE;
4820 zthr_wakeup(arc_evict_zthr);
4821 }
4822 return;
4823 case ARC_OVF_SEVERE:
4824 default:
4825 {
4826 arc_evict_waiter_t aw;
4827 list_link_init(&aw.aew_node);
4828 cv_init(&aw.aew_cv, NULL, CV_DEFAULT, NULL);
4829
4830 uint64_t last_count = 0;
4831 mutex_enter(&arc_evict_lock);
4832 if (!list_is_empty(&arc_evict_waiters)) {
4833 arc_evict_waiter_t *last =
4834 list_tail(&arc_evict_waiters);
4835 last_count = last->aew_count;
4836 } else if (!arc_evict_needed) {
4837 arc_evict_needed = B_TRUE;
4838 zthr_wakeup(arc_evict_zthr);
4839 }
4840 /*
4841 * Note, the last waiter's count may be less than
4842 * arc_evict_count if we are low on memory in which
4843 * case arc_evict_state_impl() may have deferred
4844 * wakeups (but still incremented arc_evict_count).
4845 */
4846 aw.aew_count = MAX(last_count, arc_evict_count) + amount;
4847
4848 list_insert_tail(&arc_evict_waiters, &aw);
4849
4850 arc_set_need_free();
4851
4852 DTRACE_PROBE3(arc__wait__for__eviction,
4853 uint64_t, amount,
4854 uint64_t, arc_evict_count,
4855 uint64_t, aw.aew_count);
4856
4857 /*
4858 * We will be woken up either when arc_evict_count reaches
4859 * aew_count, or when the ARC is no longer overflowing and
4860 * eviction completes.
4861 * In case of a "false" wakeup, we will still be on the list.
4862 */
4863 do {
4864 cv_wait(&aw.aew_cv, &arc_evict_lock);
4865 } while (list_link_active(&aw.aew_node));
4866 mutex_exit(&arc_evict_lock);
4867
4868 cv_destroy(&aw.aew_cv);
4869 }
4870 }
4871 }
4872
4873 /*
4874 * Allocate a block and return it to the caller. If we are hitting the
4875 * hard limit for the cache size, we must sleep, waiting for the eviction
4876 * thread to catch up. If we're past the target size but below the hard
4877 * limit, we'll only signal the reclaim thread and continue on.
4878 */
4879 static void
4880 arc_get_data_impl(arc_buf_hdr_t *hdr, uint64_t size, const void *tag,
4881 int alloc_flags)
4882 {
4883 arc_adapt(size);
4884
4885 /*
4886 * If arc_size is currently overflowing, we must be adding data
4887 * faster than we are evicting. To ensure we don't compound the
4888 * problem by adding more data and forcing arc_size to grow even
4889 * further past its target size, we wait for the eviction thread to
4890 * make some progress. We also wait for there to be sufficient free
4891 * memory in the system, as measured by arc_free_memory().
4892 *
4893 * Specifically, we wait for zfs_arc_eviction_pct percent of the
4894 * requested size to be evicted. This should be more than 100%, to
4895 * ensure that progress is also made towards getting arc_size
4896 * under arc_c. See the comment above zfs_arc_eviction_pct.
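 * For example, with zfs_arc_eviction_pct set to 200, a 128K allocation
 * waits for 256K of eviction progress before proceeding.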
4897 */ 4898 arc_wait_for_eviction(size * zfs_arc_eviction_pct / 100, 4899 alloc_flags & ARC_HDR_USE_RESERVE); 4900 4901 arc_buf_contents_t type = arc_buf_type(hdr); 4902 if (type == ARC_BUFC_METADATA) { 4903 arc_space_consume(size, ARC_SPACE_META); 4904 } else { 4905 arc_space_consume(size, ARC_SPACE_DATA); 4906 } 4907 4908 /* 4909 * Update the state size. Note that ghost states have a 4910 * "ghost size" and so don't need to be updated. 4911 */ 4912 arc_state_t *state = hdr->b_l1hdr.b_state; 4913 if (!GHOST_STATE(state)) { 4914 4915 (void) zfs_refcount_add_many(&state->arcs_size[type], size, 4916 tag); 4917 4918 /* 4919 * If this is reached via arc_read, the link is 4920 * protected by the hash lock. If reached via 4921 * arc_buf_alloc, the header should not be accessed by 4922 * any other thread. And, if reached via arc_read_done, 4923 * the hash lock will protect it if it's found in the 4924 * hash table; otherwise no other thread should be 4925 * trying to [add|remove]_reference it. 4926 */ 4927 if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) { 4928 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 4929 (void) zfs_refcount_add_many(&state->arcs_esize[type], 4930 size, tag); 4931 } 4932 } 4933 } 4934 4935 static void 4936 arc_free_data_abd(arc_buf_hdr_t *hdr, abd_t *abd, uint64_t size, 4937 const void *tag) 4938 { 4939 arc_free_data_impl(hdr, size, tag); 4940 abd_free(abd); 4941 } 4942 4943 static void 4944 arc_free_data_buf(arc_buf_hdr_t *hdr, void *buf, uint64_t size, const void *tag) 4945 { 4946 arc_buf_contents_t type = arc_buf_type(hdr); 4947 4948 arc_free_data_impl(hdr, size, tag); 4949 if (type == ARC_BUFC_METADATA) { 4950 zio_buf_free(buf, size); 4951 } else { 4952 ASSERT(type == ARC_BUFC_DATA); 4953 zio_data_buf_free(buf, size); 4954 } 4955 } 4956 4957 /* 4958 * Free the arc data buffer. 4959 */ 4960 static void 4961 arc_free_data_impl(arc_buf_hdr_t *hdr, uint64_t size, const void *tag) 4962 { 4963 arc_state_t *state = hdr->b_l1hdr.b_state; 4964 arc_buf_contents_t type = arc_buf_type(hdr); 4965 4966 /* protected by hash lock, if in the hash table */ 4967 if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) { 4968 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 4969 ASSERT(state != arc_anon && state != arc_l2c_only); 4970 4971 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 4972 size, tag); 4973 } 4974 (void) zfs_refcount_remove_many(&state->arcs_size[type], size, tag); 4975 4976 VERIFY3U(hdr->b_type, ==, type); 4977 if (type == ARC_BUFC_METADATA) { 4978 arc_space_return(size, ARC_SPACE_META); 4979 } else { 4980 ASSERT(type == ARC_BUFC_DATA); 4981 arc_space_return(size, ARC_SPACE_DATA); 4982 } 4983 } 4984 4985 /* 4986 * This routine is called whenever a buffer is accessed. 4987 */ 4988 static void 4989 arc_access(arc_buf_hdr_t *hdr, arc_flags_t arc_flags, boolean_t hit) 4990 { 4991 ASSERT(MUTEX_HELD(HDR_LOCK(hdr))); 4992 ASSERT(HDR_HAS_L1HDR(hdr)); 4993 4994 /* 4995 * Update buffer prefetch status. 
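 * A demand hit on a header that was brought in by prefetch clears its
 * prefetch flags, so it is treated as demand data from then on, while a
 * prefetch access to a header last touched by demand sets them; any L2ARC
 * state statistics are adjusted around the flag change.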
4996 */ 4997 boolean_t was_prefetch = HDR_PREFETCH(hdr); 4998 boolean_t now_prefetch = arc_flags & ARC_FLAG_PREFETCH; 4999 if (was_prefetch != now_prefetch) { 5000 if (was_prefetch) { 5001 ARCSTAT_CONDSTAT(hit, demand_hit, demand_iohit, 5002 HDR_PRESCIENT_PREFETCH(hdr), prescient, predictive, 5003 prefetch); 5004 } 5005 if (HDR_HAS_L2HDR(hdr)) 5006 l2arc_hdr_arcstats_decrement_state(hdr); 5007 if (was_prefetch) { 5008 arc_hdr_clear_flags(hdr, 5009 ARC_FLAG_PREFETCH | ARC_FLAG_PRESCIENT_PREFETCH); 5010 } else { 5011 arc_hdr_set_flags(hdr, ARC_FLAG_PREFETCH); 5012 } 5013 if (HDR_HAS_L2HDR(hdr)) 5014 l2arc_hdr_arcstats_increment_state(hdr); 5015 } 5016 if (now_prefetch) { 5017 if (arc_flags & ARC_FLAG_PRESCIENT_PREFETCH) { 5018 arc_hdr_set_flags(hdr, ARC_FLAG_PRESCIENT_PREFETCH); 5019 ARCSTAT_BUMP(arcstat_prescient_prefetch); 5020 } else { 5021 ARCSTAT_BUMP(arcstat_predictive_prefetch); 5022 } 5023 } 5024 if (arc_flags & ARC_FLAG_L2CACHE) 5025 arc_hdr_set_flags(hdr, ARC_FLAG_L2CACHE); 5026 5027 clock_t now = ddi_get_lbolt(); 5028 if (hdr->b_l1hdr.b_state == arc_anon) { 5029 arc_state_t *new_state; 5030 /* 5031 * This buffer is not in the cache, and does not appear in 5032 * our "ghost" lists. Add it to the MRU or uncached state. 5033 */ 5034 ASSERT0(hdr->b_l1hdr.b_arc_access); 5035 hdr->b_l1hdr.b_arc_access = now; 5036 if (HDR_UNCACHED(hdr)) { 5037 new_state = arc_uncached; 5038 DTRACE_PROBE1(new_state__uncached, arc_buf_hdr_t *, 5039 hdr); 5040 } else { 5041 new_state = arc_mru; 5042 DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr); 5043 } 5044 arc_change_state(new_state, hdr); 5045 } else if (hdr->b_l1hdr.b_state == arc_mru) { 5046 /* 5047 * This buffer has been accessed once recently and either 5048 * its read is still in progress or it is in the cache. 5049 */ 5050 if (HDR_IO_IN_PROGRESS(hdr)) { 5051 hdr->b_l1hdr.b_arc_access = now; 5052 return; 5053 } 5054 hdr->b_l1hdr.b_mru_hits++; 5055 ARCSTAT_BUMP(arcstat_mru_hits); 5056 5057 /* 5058 * If the previous access was a prefetch, then it already 5059 * handled possible promotion, so nothing more to do for now. 5060 */ 5061 if (was_prefetch) { 5062 hdr->b_l1hdr.b_arc_access = now; 5063 return; 5064 } 5065 5066 /* 5067 * If more than ARC_MINTIME have passed from the previous 5068 * hit, promote the buffer to the MFU state. 5069 */ 5070 if (ddi_time_after(now, hdr->b_l1hdr.b_arc_access + 5071 ARC_MINTIME)) { 5072 hdr->b_l1hdr.b_arc_access = now; 5073 DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr); 5074 arc_change_state(arc_mfu, hdr); 5075 } 5076 } else if (hdr->b_l1hdr.b_state == arc_mru_ghost) { 5077 arc_state_t *new_state; 5078 /* 5079 * This buffer has been accessed once recently, but was 5080 * evicted from the cache. Would we have bigger MRU, it 5081 * would be an MRU hit, so handle it the same way, except 5082 * we don't need to check the previous access time. 5083 */ 5084 hdr->b_l1hdr.b_mru_ghost_hits++; 5085 ARCSTAT_BUMP(arcstat_mru_ghost_hits); 5086 hdr->b_l1hdr.b_arc_access = now; 5087 wmsum_add(&arc_mru_ghost->arcs_hits[arc_buf_type(hdr)], 5088 arc_hdr_size(hdr)); 5089 if (was_prefetch) { 5090 new_state = arc_mru; 5091 DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr); 5092 } else { 5093 new_state = arc_mfu; 5094 DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr); 5095 } 5096 arc_change_state(new_state, hdr); 5097 } else if (hdr->b_l1hdr.b_state == arc_mfu) { 5098 /* 5099 * This buffer has been accessed more than once and either 5100 * still in the cache or being restored from one of ghosts. 
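 * If the read is still in progress the access was already counted as
 * an iohit by the caller, so mfu_hits is only bumped for completed
 * reads.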
5101 */ 5102 if (!HDR_IO_IN_PROGRESS(hdr)) { 5103 hdr->b_l1hdr.b_mfu_hits++; 5104 ARCSTAT_BUMP(arcstat_mfu_hits); 5105 } 5106 hdr->b_l1hdr.b_arc_access = now; 5107 } else if (hdr->b_l1hdr.b_state == arc_mfu_ghost) { 5108 /* 5109 * This buffer has been accessed more than once recently, but 5110 * has been evicted from the cache. Had the MFU been bigger 5111 * it would have stayed in the cache, so move it back to the MFU state. 5112 */ 5113 hdr->b_l1hdr.b_mfu_ghost_hits++; 5114 ARCSTAT_BUMP(arcstat_mfu_ghost_hits); 5115 hdr->b_l1hdr.b_arc_access = now; 5116 wmsum_add(&arc_mfu_ghost->arcs_hits[arc_buf_type(hdr)], 5117 arc_hdr_size(hdr)); 5118 DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr); 5119 arc_change_state(arc_mfu, hdr); 5120 } else if (hdr->b_l1hdr.b_state == arc_uncached) { 5121 /* 5122 * This buffer is uncacheable, but we got a hit. Probably 5123 * a demand read after prefetch. Nothing more to do here. 5124 */ 5125 if (!HDR_IO_IN_PROGRESS(hdr)) 5126 ARCSTAT_BUMP(arcstat_uncached_hits); 5127 hdr->b_l1hdr.b_arc_access = now; 5128 } else if (hdr->b_l1hdr.b_state == arc_l2c_only) { 5129 /* 5130 * This buffer is on the 2nd Level ARC and was not accessed 5131 * for a long time, so treat it as new and put it into the MRU. 5132 */ 5133 hdr->b_l1hdr.b_arc_access = now; 5134 DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr); 5135 arc_change_state(arc_mru, hdr); 5136 } else { 5137 cmn_err(CE_PANIC, "invalid arc state 0x%p", 5138 hdr->b_l1hdr.b_state); 5139 } 5140 } 5141 5142 /* 5143 * This routine is called by dbuf_hold() to update the arc_access() state 5144 * which otherwise would be skipped for entries in the dbuf cache. 5145 */ 5146 void 5147 arc_buf_access(arc_buf_t *buf) 5148 { 5149 arc_buf_hdr_t *hdr = buf->b_hdr; 5150 5151 /* 5152 * Avoid taking the hash_lock when possible as an optimization. 5153 * The header must be checked again under the hash_lock in order 5154 * to handle the case where it is concurrently being released.
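 * If the header is released between the unlocked check and acquiring
 * the lock, the recheck below catches it and the access is accounted
 * as arcstat_access_skip.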
5155 */ 5156 if (hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY(hdr)) 5157 return; 5158 5159 kmutex_t *hash_lock = HDR_LOCK(hdr); 5160 mutex_enter(hash_lock); 5161 5162 if (hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY(hdr)) { 5163 mutex_exit(hash_lock); 5164 ARCSTAT_BUMP(arcstat_access_skip); 5165 return; 5166 } 5167 5168 ASSERT(hdr->b_l1hdr.b_state == arc_mru || 5169 hdr->b_l1hdr.b_state == arc_mfu || 5170 hdr->b_l1hdr.b_state == arc_uncached); 5171 5172 DTRACE_PROBE1(arc__hit, arc_buf_hdr_t *, hdr); 5173 arc_access(hdr, 0, B_TRUE); 5174 mutex_exit(hash_lock); 5175 5176 ARCSTAT_BUMP(arcstat_hits); 5177 ARCSTAT_CONDSTAT(B_TRUE /* demand */, demand, prefetch, 5178 !HDR_ISTYPE_METADATA(hdr), data, metadata, hits); 5179 } 5180 5181 /* a generic arc_read_done_func_t which you can use */ 5182 void 5183 arc_bcopy_func(zio_t *zio, const zbookmark_phys_t *zb, const blkptr_t *bp, 5184 arc_buf_t *buf, void *arg) 5185 { 5186 (void) zio, (void) zb, (void) bp; 5187 5188 if (buf == NULL) 5189 return; 5190 5191 memcpy(arg, buf->b_data, arc_buf_size(buf)); 5192 arc_buf_destroy(buf, arg); 5193 } 5194 5195 /* a generic arc_read_done_func_t */ 5196 void 5197 arc_getbuf_func(zio_t *zio, const zbookmark_phys_t *zb, const blkptr_t *bp, 5198 arc_buf_t *buf, void *arg) 5199 { 5200 (void) zb, (void) bp; 5201 arc_buf_t **bufp = arg; 5202 5203 if (buf == NULL) { 5204 ASSERT(zio == NULL || zio->io_error != 0); 5205 *bufp = NULL; 5206 } else { 5207 ASSERT(zio == NULL || zio->io_error == 0); 5208 *bufp = buf; 5209 ASSERT(buf->b_data != NULL); 5210 } 5211 } 5212 5213 static void 5214 arc_hdr_verify(arc_buf_hdr_t *hdr, blkptr_t *bp) 5215 { 5216 if (BP_IS_HOLE(bp) || BP_IS_EMBEDDED(bp)) { 5217 ASSERT3U(HDR_GET_PSIZE(hdr), ==, 0); 5218 ASSERT3U(arc_hdr_get_compress(hdr), ==, ZIO_COMPRESS_OFF); 5219 } else { 5220 if (HDR_COMPRESSION_ENABLED(hdr)) { 5221 ASSERT3U(arc_hdr_get_compress(hdr), ==, 5222 BP_GET_COMPRESS(bp)); 5223 } 5224 ASSERT3U(HDR_GET_LSIZE(hdr), ==, BP_GET_LSIZE(bp)); 5225 ASSERT3U(HDR_GET_PSIZE(hdr), ==, BP_GET_PSIZE(bp)); 5226 ASSERT3U(!!HDR_PROTECTED(hdr), ==, BP_IS_PROTECTED(bp)); 5227 } 5228 } 5229 5230 static void 5231 arc_read_done(zio_t *zio) 5232 { 5233 blkptr_t *bp = zio->io_bp; 5234 arc_buf_hdr_t *hdr = zio->io_private; 5235 kmutex_t *hash_lock = NULL; 5236 arc_callback_t *callback_list; 5237 arc_callback_t *acb; 5238 5239 /* 5240 * The hdr was inserted into hash-table and removed from lists 5241 * prior to starting I/O. We should find this header, since 5242 * it's in the hash table, and it should be legit since it's 5243 * not possible to evict it during the I/O. The only possible 5244 * reason for it not to be found is if we were freed during the 5245 * read. 
5246 */ 5247 if (HDR_IN_HASH_TABLE(hdr)) { 5248 arc_buf_hdr_t *found; 5249 5250 ASSERT3U(hdr->b_birth, ==, BP_PHYSICAL_BIRTH(zio->io_bp)); 5251 ASSERT3U(hdr->b_dva.dva_word[0], ==, 5252 BP_IDENTITY(zio->io_bp)->dva_word[0]); 5253 ASSERT3U(hdr->b_dva.dva_word[1], ==, 5254 BP_IDENTITY(zio->io_bp)->dva_word[1]); 5255 5256 found = buf_hash_find(hdr->b_spa, zio->io_bp, &hash_lock); 5257 5258 ASSERT((found == hdr && 5259 DVA_EQUAL(&hdr->b_dva, BP_IDENTITY(zio->io_bp))) || 5260 (found == hdr && HDR_L2_READING(hdr))); 5261 ASSERT3P(hash_lock, !=, NULL); 5262 } 5263 5264 if (BP_IS_PROTECTED(bp)) { 5265 hdr->b_crypt_hdr.b_ot = BP_GET_TYPE(bp); 5266 hdr->b_crypt_hdr.b_dsobj = zio->io_bookmark.zb_objset; 5267 zio_crypt_decode_params_bp(bp, hdr->b_crypt_hdr.b_salt, 5268 hdr->b_crypt_hdr.b_iv); 5269 5270 if (zio->io_error == 0) { 5271 if (BP_GET_TYPE(bp) == DMU_OT_INTENT_LOG) { 5272 void *tmpbuf; 5273 5274 tmpbuf = abd_borrow_buf_copy(zio->io_abd, 5275 sizeof (zil_chain_t)); 5276 zio_crypt_decode_mac_zil(tmpbuf, 5277 hdr->b_crypt_hdr.b_mac); 5278 abd_return_buf(zio->io_abd, tmpbuf, 5279 sizeof (zil_chain_t)); 5280 } else { 5281 zio_crypt_decode_mac_bp(bp, 5282 hdr->b_crypt_hdr.b_mac); 5283 } 5284 } 5285 } 5286 5287 if (zio->io_error == 0) { 5288 /* byteswap if necessary */ 5289 if (BP_SHOULD_BYTESWAP(zio->io_bp)) { 5290 if (BP_GET_LEVEL(zio->io_bp) > 0) { 5291 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_UINT64; 5292 } else { 5293 hdr->b_l1hdr.b_byteswap = 5294 DMU_OT_BYTESWAP(BP_GET_TYPE(zio->io_bp)); 5295 } 5296 } else { 5297 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS; 5298 } 5299 if (!HDR_L2_READING(hdr)) { 5300 hdr->b_complevel = zio->io_prop.zp_complevel; 5301 } 5302 } 5303 5304 arc_hdr_clear_flags(hdr, ARC_FLAG_L2_EVICTED); 5305 if (l2arc_noprefetch && HDR_PREFETCH(hdr)) 5306 arc_hdr_clear_flags(hdr, ARC_FLAG_L2CACHE); 5307 5308 callback_list = hdr->b_l1hdr.b_acb; 5309 ASSERT3P(callback_list, !=, NULL); 5310 hdr->b_l1hdr.b_acb = NULL; 5311 5312 /* 5313 * If a read request has a callback (i.e. acb_done is not NULL), then we 5314 * make a buf containing the data according to the parameters which were 5315 * passed in. The implementation of arc_buf_alloc_impl() ensures that we 5316 * aren't needlessly decompressing the data multiple times. 5317 */ 5318 int callback_cnt = 0; 5319 for (acb = callback_list; acb != NULL; acb = acb->acb_next) { 5320 5321 /* We need the last one to call below in original order. */ 5322 callback_list = acb; 5323 5324 if (!acb->acb_done || acb->acb_nobuf) 5325 continue; 5326 5327 callback_cnt++; 5328 5329 if (zio->io_error != 0) 5330 continue; 5331 5332 int error = arc_buf_alloc_impl(hdr, zio->io_spa, 5333 &acb->acb_zb, acb->acb_private, acb->acb_encrypted, 5334 acb->acb_compressed, acb->acb_noauth, B_TRUE, 5335 &acb->acb_buf); 5336 5337 /* 5338 * Assert non-speculative zios didn't fail because an 5339 * encryption key wasn't loaded 5340 */ 5341 ASSERT((zio->io_flags & ZIO_FLAG_SPECULATIVE) || 5342 error != EACCES); 5343 5344 /* 5345 * If we failed to decrypt, report an error now (as the zio 5346 * layer would have done if it had done the transforms). 5347 */ 5348 if (error == ECKSUM) { 5349 ASSERT(BP_IS_PROTECTED(bp)); 5350 error = SET_ERROR(EIO); 5351 if ((zio->io_flags & ZIO_FLAG_SPECULATIVE) == 0) { 5352 spa_log_error(zio->io_spa, &acb->acb_zb, 5353 &zio->io_bp->blk_birth); 5354 (void) zfs_ereport_post( 5355 FM_EREPORT_ZFS_AUTHENTICATION, 5356 zio->io_spa, NULL, &acb->acb_zb, zio, 0); 5357 } 5358 } 5359 5360 if (error != 0) { 5361 /* 5362 * Decompression or decryption failed. 
Set 5363 * io_error so that when we call acb_done 5364 * (below), we will indicate that the read 5365 * failed. Note that in the unusual case 5366 * where one callback is compressed and another 5367 * uncompressed, we will mark all of them 5368 * as failed, even though the uncompressed 5369 * one can't actually fail. In this case, 5370 * the hdr will not be anonymous, because 5371 * if there are multiple callbacks, it's 5372 * because multiple threads found the same 5373 * arc buf in the hash table. 5374 */ 5375 zio->io_error = error; 5376 } 5377 } 5378 5379 /* 5380 * If there are multiple callbacks, we must have the hash lock, 5381 * because the only way for multiple threads to find this hdr is 5382 * in the hash table. This ensures that if there are multiple 5383 * callbacks, the hdr is not anonymous. If it were anonymous, 5384 * we couldn't use arc_buf_destroy() in the error case below. 5385 */ 5386 ASSERT(callback_cnt < 2 || hash_lock != NULL); 5387 5388 if (zio->io_error == 0) { 5389 arc_hdr_verify(hdr, zio->io_bp); 5390 } else { 5391 arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR); 5392 if (hdr->b_l1hdr.b_state != arc_anon) 5393 arc_change_state(arc_anon, hdr); 5394 if (HDR_IN_HASH_TABLE(hdr)) 5395 buf_hash_remove(hdr); 5396 } 5397 5398 arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 5399 (void) remove_reference(hdr, hdr); 5400 5401 if (hash_lock != NULL) 5402 mutex_exit(hash_lock); 5403 5404 /* execute each callback and free its structure */ 5405 while ((acb = callback_list) != NULL) { 5406 if (acb->acb_done != NULL) { 5407 if (zio->io_error != 0 && acb->acb_buf != NULL) { 5408 /* 5409 * If arc_buf_alloc_impl() fails during 5410 * decompression, the buf will still be 5411 * allocated, and needs to be freed here. 5412 */ 5413 arc_buf_destroy(acb->acb_buf, 5414 acb->acb_private); 5415 acb->acb_buf = NULL; 5416 } 5417 acb->acb_done(zio, &zio->io_bookmark, zio->io_bp, 5418 acb->acb_buf, acb->acb_private); 5419 } 5420 5421 if (acb->acb_zio_dummy != NULL) { 5422 acb->acb_zio_dummy->io_error = zio->io_error; 5423 zio_nowait(acb->acb_zio_dummy); 5424 } 5425 5426 callback_list = acb->acb_prev; 5427 if (acb->acb_wait) { 5428 mutex_enter(&acb->acb_wait_lock); 5429 acb->acb_wait_error = zio->io_error; 5430 acb->acb_wait = B_FALSE; 5431 cv_signal(&acb->acb_wait_cv); 5432 mutex_exit(&acb->acb_wait_lock); 5433 /* acb will be freed by the waiting thread. */ 5434 } else { 5435 kmem_free(acb, sizeof (arc_callback_t)); 5436 } 5437 } 5438 } 5439 5440 /* 5441 * "Read" the block at the specified DVA (in bp) via the 5442 * cache. If the block is found in the cache, invoke the provided 5443 * callback immediately and return. Note that the `zio' parameter 5444 * in the callback will be NULL in this case, since no IO was 5445 * required. If the block is not in the cache pass the read request 5446 * on to the spa with a substitute callback function, so that the 5447 * requested block will be added to the cache. 5448 * 5449 * If a read request arrives for a block that has a read in-progress, 5450 * either wait for the in-progress read to complete (and return the 5451 * results); or, if this is a read with a "done" func, add a record 5452 * to the read to invoke the "done" func when the read completes, 5453 * and return; or just return. 5454 * 5455 * arc_read_done() will invoke all the requested "done" functions 5456 * for readers of this block. 
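 *
 * A typical synchronous, blocking caller might look like this
 * (sketch only; error handling omitted):
 *
 *	arc_flags_t aflags = ARC_FLAG_WAIT;
 *	arc_buf_t *abuf = NULL;
 *	int err = arc_read(NULL, spa, bp, arc_getbuf_func, &abuf,
 *	    ZIO_PRIORITY_SYNC_READ, ZIO_FLAG_CANFAIL, &aflags, zb);
 *
 * On success arc_getbuf_func() stores the buffer in abuf, which the
 * caller eventually returns with arc_buf_destroy(abuf, &abuf).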
5457 */ 5458 int 5459 arc_read(zio_t *pio, spa_t *spa, const blkptr_t *bp, 5460 arc_read_done_func_t *done, void *private, zio_priority_t priority, 5461 int zio_flags, arc_flags_t *arc_flags, const zbookmark_phys_t *zb) 5462 { 5463 arc_buf_hdr_t *hdr = NULL; 5464 kmutex_t *hash_lock = NULL; 5465 zio_t *rzio; 5466 uint64_t guid = spa_load_guid(spa); 5467 boolean_t compressed_read = (zio_flags & ZIO_FLAG_RAW_COMPRESS) != 0; 5468 boolean_t encrypted_read = BP_IS_ENCRYPTED(bp) && 5469 (zio_flags & ZIO_FLAG_RAW_ENCRYPT) != 0; 5470 boolean_t noauth_read = BP_IS_AUTHENTICATED(bp) && 5471 (zio_flags & ZIO_FLAG_RAW_ENCRYPT) != 0; 5472 boolean_t embedded_bp = !!BP_IS_EMBEDDED(bp); 5473 boolean_t no_buf = *arc_flags & ARC_FLAG_NO_BUF; 5474 arc_buf_t *buf = NULL; 5475 int rc = 0; 5476 5477 ASSERT(!embedded_bp || 5478 BPE_GET_ETYPE(bp) == BP_EMBEDDED_TYPE_DATA); 5479 ASSERT(!BP_IS_HOLE(bp)); 5480 ASSERT(!BP_IS_REDACTED(bp)); 5481 5482 /* 5483 * Normally SPL_FSTRANS will already be set since kernel threads which 5484 * expect to call the DMU interfaces will set it when created. System 5485 * calls are similarly handled by setting/cleaning the bit in the 5486 * registered callback (module/os/.../zfs/zpl_*). 5487 * 5488 * External consumers such as Lustre which call the exported DMU 5489 * interfaces may not have set SPL_FSTRANS. To avoid a deadlock 5490 * on the hash_lock always set and clear the bit. 5491 */ 5492 fstrans_cookie_t cookie = spl_fstrans_mark(); 5493 top: 5494 /* 5495 * Verify the block pointer contents are reasonable. This should 5496 * always be the case since the blkptr is protected by a checksum. 5497 * However, if there is damage it's desirable to detect this early 5498 * and treat it as a checksum error. This allows an alternate blkptr 5499 * to be tried when one is available (e.g. ditto blocks). 5500 */ 5501 if (!zfs_blkptr_verify(spa, bp, (zio_flags & ZIO_FLAG_CONFIG_WRITER) ? 5502 BLK_CONFIG_HELD : BLK_CONFIG_NEEDED, BLK_VERIFY_LOG)) { 5503 rc = SET_ERROR(ECKSUM); 5504 goto done; 5505 } 5506 5507 if (!embedded_bp) { 5508 /* 5509 * Embedded BP's have no DVA and require no I/O to "read". 5510 * Create an anonymous arc buf to back it. 5511 */ 5512 hdr = buf_hash_find(guid, bp, &hash_lock); 5513 } 5514 5515 /* 5516 * Determine if we have an L1 cache hit or a cache miss. For simplicity 5517 * we maintain encrypted data separately from compressed / uncompressed 5518 * data. If the user is requesting raw encrypted data and we don't have 5519 * that in the header we will read from disk to guarantee that we can 5520 * get it even if the encryption keys aren't loaded. 5521 */ 5522 if (hdr != NULL && HDR_HAS_L1HDR(hdr) && (HDR_HAS_RABD(hdr) || 5523 (hdr->b_l1hdr.b_pabd != NULL && !encrypted_read))) { 5524 boolean_t is_data = !HDR_ISTYPE_METADATA(hdr); 5525 5526 if (HDR_IO_IN_PROGRESS(hdr)) { 5527 if (*arc_flags & ARC_FLAG_CACHED_ONLY) { 5528 mutex_exit(hash_lock); 5529 ARCSTAT_BUMP(arcstat_cached_only_in_progress); 5530 rc = SET_ERROR(ENOENT); 5531 goto done; 5532 } 5533 5534 zio_t *head_zio = hdr->b_l1hdr.b_acb->acb_zio_head; 5535 ASSERT3P(head_zio, !=, NULL); 5536 if ((hdr->b_flags & ARC_FLAG_PRIO_ASYNC_READ) && 5537 priority == ZIO_PRIORITY_SYNC_READ) { 5538 /* 5539 * This is a sync read that needs to wait for 5540 * an in-flight async read. Request that the 5541 * zio have its priority upgraded. 
5542 */ 5543 zio_change_priority(head_zio, priority); 5544 DTRACE_PROBE1(arc__async__upgrade__sync, 5545 arc_buf_hdr_t *, hdr); 5546 ARCSTAT_BUMP(arcstat_async_upgrade_sync); 5547 } 5548 5549 DTRACE_PROBE1(arc__iohit, arc_buf_hdr_t *, hdr); 5550 arc_access(hdr, *arc_flags, B_FALSE); 5551 5552 /* 5553 * If there are multiple threads reading the same block 5554 * and that block is not yet in the ARC, then only one 5555 * thread will do the physical I/O and all other 5556 * threads will wait until that I/O completes. 5557 * Synchronous reads use the acb_wait_cv whereas nowait 5558 * reads register a callback. Both are signalled/called 5559 * in arc_read_done. 5560 * 5561 * Errors of the physical I/O may need to be propagated. 5562 * Synchronous read errors are returned here from 5563 * arc_read_done via acb_wait_error. Nowait reads 5564 * attach the acb_zio_dummy zio to pio and 5565 * arc_read_done propagates the physical I/O's io_error 5566 * to acb_zio_dummy, and thereby to pio. 5567 */ 5568 arc_callback_t *acb = NULL; 5569 if (done || pio || *arc_flags & ARC_FLAG_WAIT) { 5570 acb = kmem_zalloc(sizeof (arc_callback_t), 5571 KM_SLEEP); 5572 acb->acb_done = done; 5573 acb->acb_private = private; 5574 acb->acb_compressed = compressed_read; 5575 acb->acb_encrypted = encrypted_read; 5576 acb->acb_noauth = noauth_read; 5577 acb->acb_nobuf = no_buf; 5578 if (*arc_flags & ARC_FLAG_WAIT) { 5579 acb->acb_wait = B_TRUE; 5580 mutex_init(&acb->acb_wait_lock, NULL, 5581 MUTEX_DEFAULT, NULL); 5582 cv_init(&acb->acb_wait_cv, NULL, 5583 CV_DEFAULT, NULL); 5584 } 5585 acb->acb_zb = *zb; 5586 if (pio != NULL) { 5587 acb->acb_zio_dummy = zio_null(pio, 5588 spa, NULL, NULL, NULL, zio_flags); 5589 } 5590 acb->acb_zio_head = head_zio; 5591 acb->acb_next = hdr->b_l1hdr.b_acb; 5592 hdr->b_l1hdr.b_acb->acb_prev = acb; 5593 hdr->b_l1hdr.b_acb = acb; 5594 } 5595 mutex_exit(hash_lock); 5596 5597 ARCSTAT_BUMP(arcstat_iohits); 5598 ARCSTAT_CONDSTAT(!(*arc_flags & ARC_FLAG_PREFETCH), 5599 demand, prefetch, is_data, data, metadata, iohits); 5600 5601 if (*arc_flags & ARC_FLAG_WAIT) { 5602 mutex_enter(&acb->acb_wait_lock); 5603 while (acb->acb_wait) { 5604 cv_wait(&acb->acb_wait_cv, 5605 &acb->acb_wait_lock); 5606 } 5607 rc = acb->acb_wait_error; 5608 mutex_exit(&acb->acb_wait_lock); 5609 mutex_destroy(&acb->acb_wait_lock); 5610 cv_destroy(&acb->acb_wait_cv); 5611 kmem_free(acb, sizeof (arc_callback_t)); 5612 } 5613 goto out; 5614 } 5615 5616 ASSERT(hdr->b_l1hdr.b_state == arc_mru || 5617 hdr->b_l1hdr.b_state == arc_mfu || 5618 hdr->b_l1hdr.b_state == arc_uncached); 5619 5620 DTRACE_PROBE1(arc__hit, arc_buf_hdr_t *, hdr); 5621 arc_access(hdr, *arc_flags, B_TRUE); 5622 5623 if (done && !no_buf) { 5624 ASSERT(!embedded_bp || !BP_IS_HOLE(bp)); 5625 5626 /* Get a buf with the desired data in it. */ 5627 rc = arc_buf_alloc_impl(hdr, spa, zb, private, 5628 encrypted_read, compressed_read, noauth_read, 5629 B_TRUE, &buf); 5630 if (rc == ECKSUM) { 5631 /* 5632 * Convert authentication and decryption errors 5633 * to EIO (and generate an ereport if needed) 5634 * before leaving the ARC. 
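 * This mirrors the equivalent ECKSUM-to-EIO conversion in
 * arc_read_done() for the miss path, so callers observe the same
 * error whether or not the block was already cached.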
5635 */ 5636 rc = SET_ERROR(EIO); 5637 if ((zio_flags & ZIO_FLAG_SPECULATIVE) == 0) { 5638 spa_log_error(spa, zb, &hdr->b_birth); 5639 (void) zfs_ereport_post( 5640 FM_EREPORT_ZFS_AUTHENTICATION, 5641 spa, NULL, zb, NULL, 0); 5642 } 5643 } 5644 if (rc != 0) { 5645 arc_buf_destroy_impl(buf); 5646 buf = NULL; 5647 (void) remove_reference(hdr, private); 5648 } 5649 5650 /* assert any errors weren't due to unloaded keys */ 5651 ASSERT((zio_flags & ZIO_FLAG_SPECULATIVE) || 5652 rc != EACCES); 5653 } 5654 mutex_exit(hash_lock); 5655 ARCSTAT_BUMP(arcstat_hits); 5656 ARCSTAT_CONDSTAT(!(*arc_flags & ARC_FLAG_PREFETCH), 5657 demand, prefetch, is_data, data, metadata, hits); 5658 *arc_flags |= ARC_FLAG_CACHED; 5659 goto done; 5660 } else { 5661 uint64_t lsize = BP_GET_LSIZE(bp); 5662 uint64_t psize = BP_GET_PSIZE(bp); 5663 arc_callback_t *acb; 5664 vdev_t *vd = NULL; 5665 uint64_t addr = 0; 5666 boolean_t devw = B_FALSE; 5667 uint64_t size; 5668 abd_t *hdr_abd; 5669 int alloc_flags = encrypted_read ? ARC_HDR_ALLOC_RDATA : 0; 5670 arc_buf_contents_t type = BP_GET_BUFC_TYPE(bp); 5671 5672 if (*arc_flags & ARC_FLAG_CACHED_ONLY) { 5673 if (hash_lock != NULL) 5674 mutex_exit(hash_lock); 5675 rc = SET_ERROR(ENOENT); 5676 goto done; 5677 } 5678 5679 if (hdr == NULL) { 5680 /* 5681 * This block is not in the cache or it has 5682 * embedded data. 5683 */ 5684 arc_buf_hdr_t *exists = NULL; 5685 hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize, 5686 BP_IS_PROTECTED(bp), BP_GET_COMPRESS(bp), 0, type); 5687 5688 if (!embedded_bp) { 5689 hdr->b_dva = *BP_IDENTITY(bp); 5690 hdr->b_birth = BP_PHYSICAL_BIRTH(bp); 5691 exists = buf_hash_insert(hdr, &hash_lock); 5692 } 5693 if (exists != NULL) { 5694 /* somebody beat us to the hash insert */ 5695 mutex_exit(hash_lock); 5696 buf_discard_identity(hdr); 5697 arc_hdr_destroy(hdr); 5698 goto top; /* restart the IO request */ 5699 } 5700 } else { 5701 /* 5702 * This block is in the ghost cache or encrypted data 5703 * was requested and we didn't have it. If it was 5704 * L2-only (and thus didn't have an L1 hdr), 5705 * we realloc the header to add an L1 hdr. 5706 */ 5707 if (!HDR_HAS_L1HDR(hdr)) { 5708 hdr = arc_hdr_realloc(hdr, hdr_l2only_cache, 5709 hdr_full_cache); 5710 } 5711 5712 if (GHOST_STATE(hdr->b_l1hdr.b_state)) { 5713 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 5714 ASSERT(!HDR_HAS_RABD(hdr)); 5715 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 5716 ASSERT0(zfs_refcount_count( 5717 &hdr->b_l1hdr.b_refcnt)); 5718 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 5719 #ifdef ZFS_DEBUG 5720 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); 5721 #endif 5722 } else if (HDR_IO_IN_PROGRESS(hdr)) { 5723 /* 5724 * If this header already had an IO in progress 5725 * and we are performing another IO to fetch 5726 * encrypted data we must wait until the first 5727 * IO completes so as not to confuse 5728 * arc_read_done(). This should be very rare 5729 * and so the performance impact shouldn't 5730 * matter. 
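 * The code below chains a dummy callback with acb_wait set onto the
 * header's callback list, blocks on acb_wait_cv until arc_read_done()
 * signals it, and then retries from `top'.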
5731 */ 5732 arc_callback_t *acb = kmem_zalloc( 5733 sizeof (arc_callback_t), KM_SLEEP); 5734 acb->acb_wait = B_TRUE; 5735 mutex_init(&acb->acb_wait_lock, NULL, 5736 MUTEX_DEFAULT, NULL); 5737 cv_init(&acb->acb_wait_cv, NULL, CV_DEFAULT, 5738 NULL); 5739 acb->acb_zio_head = 5740 hdr->b_l1hdr.b_acb->acb_zio_head; 5741 acb->acb_next = hdr->b_l1hdr.b_acb; 5742 hdr->b_l1hdr.b_acb->acb_prev = acb; 5743 hdr->b_l1hdr.b_acb = acb; 5744 mutex_exit(hash_lock); 5745 mutex_enter(&acb->acb_wait_lock); 5746 while (acb->acb_wait) { 5747 cv_wait(&acb->acb_wait_cv, 5748 &acb->acb_wait_lock); 5749 } 5750 mutex_exit(&acb->acb_wait_lock); 5751 mutex_destroy(&acb->acb_wait_lock); 5752 cv_destroy(&acb->acb_wait_cv); 5753 kmem_free(acb, sizeof (arc_callback_t)); 5754 goto top; 5755 } 5756 } 5757 if (*arc_flags & ARC_FLAG_UNCACHED) { 5758 arc_hdr_set_flags(hdr, ARC_FLAG_UNCACHED); 5759 if (!encrypted_read) 5760 alloc_flags |= ARC_HDR_ALLOC_LINEAR; 5761 } 5762 5763 /* 5764 * Take additional reference for IO_IN_PROGRESS. It stops 5765 * arc_access() from putting this header without any buffers 5766 * and so other references but obviously nonevictable onto 5767 * the evictable list of MRU or MFU state. 5768 */ 5769 add_reference(hdr, hdr); 5770 if (!embedded_bp) 5771 arc_access(hdr, *arc_flags, B_FALSE); 5772 arc_hdr_set_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 5773 arc_hdr_alloc_abd(hdr, alloc_flags); 5774 if (encrypted_read) { 5775 ASSERT(HDR_HAS_RABD(hdr)); 5776 size = HDR_GET_PSIZE(hdr); 5777 hdr_abd = hdr->b_crypt_hdr.b_rabd; 5778 zio_flags |= ZIO_FLAG_RAW; 5779 } else { 5780 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 5781 size = arc_hdr_size(hdr); 5782 hdr_abd = hdr->b_l1hdr.b_pabd; 5783 5784 if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF) { 5785 zio_flags |= ZIO_FLAG_RAW_COMPRESS; 5786 } 5787 5788 /* 5789 * For authenticated bp's, we do not ask the ZIO layer 5790 * to authenticate them since this will cause the entire 5791 * IO to fail if the key isn't loaded. Instead, we 5792 * defer authentication until arc_buf_fill(), which will 5793 * verify the data when the key is available. 5794 */ 5795 if (BP_IS_AUTHENTICATED(bp)) 5796 zio_flags |= ZIO_FLAG_RAW_ENCRYPT; 5797 } 5798 5799 if (BP_IS_AUTHENTICATED(bp)) 5800 arc_hdr_set_flags(hdr, ARC_FLAG_NOAUTH); 5801 if (BP_GET_LEVEL(bp) > 0) 5802 arc_hdr_set_flags(hdr, ARC_FLAG_INDIRECT); 5803 ASSERT(!GHOST_STATE(hdr->b_l1hdr.b_state)); 5804 5805 acb = kmem_zalloc(sizeof (arc_callback_t), KM_SLEEP); 5806 acb->acb_done = done; 5807 acb->acb_private = private; 5808 acb->acb_compressed = compressed_read; 5809 acb->acb_encrypted = encrypted_read; 5810 acb->acb_noauth = noauth_read; 5811 acb->acb_zb = *zb; 5812 5813 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); 5814 hdr->b_l1hdr.b_acb = acb; 5815 5816 if (HDR_HAS_L2HDR(hdr) && 5817 (vd = hdr->b_l2hdr.b_dev->l2ad_vdev) != NULL) { 5818 devw = hdr->b_l2hdr.b_dev->l2ad_writing; 5819 addr = hdr->b_l2hdr.b_daddr; 5820 /* 5821 * Lock out L2ARC device removal. 5822 */ 5823 if (vdev_is_dead(vd) || 5824 !spa_config_tryenter(spa, SCL_L2ARC, vd, RW_READER)) 5825 vd = NULL; 5826 } 5827 5828 /* 5829 * We count both async reads and scrub IOs as asynchronous so 5830 * that both can be upgraded in the event of a cache hit while 5831 * the read IO is still in-flight. 
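 * (A later ZIO_PRIORITY_SYNC_READ hit on this header while the read
 * is in flight finds ARC_FLAG_PRIO_ASYNC_READ set and upgrades the
 * in-flight zio via zio_change_priority(); see the in-progress branch
 * earlier in this function.)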
5832 */ 5833 if (priority == ZIO_PRIORITY_ASYNC_READ || 5834 priority == ZIO_PRIORITY_SCRUB) 5835 arc_hdr_set_flags(hdr, ARC_FLAG_PRIO_ASYNC_READ); 5836 else 5837 arc_hdr_clear_flags(hdr, ARC_FLAG_PRIO_ASYNC_READ); 5838 5839 /* 5840 * At this point, we have a level 1 cache miss or a blkptr 5841 * with embedded data. Try again in L2ARC if possible. 5842 */ 5843 ASSERT3U(HDR_GET_LSIZE(hdr), ==, lsize); 5844 5845 /* 5846 * Skip ARC stat bump for block pointers with embedded 5847 * data. The data are read from the blkptr itself via 5848 * decode_embedded_bp_compressed(). 5849 */ 5850 if (!embedded_bp) { 5851 DTRACE_PROBE4(arc__miss, arc_buf_hdr_t *, hdr, 5852 blkptr_t *, bp, uint64_t, lsize, 5853 zbookmark_phys_t *, zb); 5854 ARCSTAT_BUMP(arcstat_misses); 5855 ARCSTAT_CONDSTAT(!(*arc_flags & ARC_FLAG_PREFETCH), 5856 demand, prefetch, !HDR_ISTYPE_METADATA(hdr), data, 5857 metadata, misses); 5858 zfs_racct_read(size, 1); 5859 } 5860 5861 /* Check if the spa even has l2 configured */ 5862 const boolean_t spa_has_l2 = l2arc_ndev != 0 && 5863 spa->spa_l2cache.sav_count > 0; 5864 5865 if (vd != NULL && spa_has_l2 && !(l2arc_norw && devw)) { 5866 /* 5867 * Read from the L2ARC if the following are true: 5868 * 1. The L2ARC vdev was previously cached. 5869 * 2. This buffer still has L2ARC metadata. 5870 * 3. This buffer isn't currently writing to the L2ARC. 5871 * 4. The L2ARC entry wasn't evicted, which may 5872 * also have invalidated the vdev. 5873 */ 5874 if (HDR_HAS_L2HDR(hdr) && 5875 !HDR_L2_WRITING(hdr) && !HDR_L2_EVICTED(hdr)) { 5876 l2arc_read_callback_t *cb; 5877 abd_t *abd; 5878 uint64_t asize; 5879 5880 DTRACE_PROBE1(l2arc__hit, arc_buf_hdr_t *, hdr); 5881 ARCSTAT_BUMP(arcstat_l2_hits); 5882 hdr->b_l2hdr.b_hits++; 5883 5884 cb = kmem_zalloc(sizeof (l2arc_read_callback_t), 5885 KM_SLEEP); 5886 cb->l2rcb_hdr = hdr; 5887 cb->l2rcb_bp = *bp; 5888 cb->l2rcb_zb = *zb; 5889 cb->l2rcb_flags = zio_flags; 5890 5891 /* 5892 * When Compressed ARC is disabled, but the 5893 * L2ARC block is compressed, arc_hdr_size() 5894 * will have returned LSIZE rather than PSIZE. 5895 */ 5896 if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && 5897 !HDR_COMPRESSION_ENABLED(hdr) && 5898 HDR_GET_PSIZE(hdr) != 0) { 5899 size = HDR_GET_PSIZE(hdr); 5900 } 5901 5902 asize = vdev_psize_to_asize(vd, size); 5903 if (asize != size) { 5904 abd = abd_alloc_for_io(asize, 5905 HDR_ISTYPE_METADATA(hdr)); 5906 cb->l2rcb_abd = abd; 5907 } else { 5908 abd = hdr_abd; 5909 } 5910 5911 ASSERT(addr >= VDEV_LABEL_START_SIZE && 5912 addr + asize <= vd->vdev_psize - 5913 VDEV_LABEL_END_SIZE); 5914 5915 /* 5916 * l2arc read. The SCL_L2ARC lock will be 5917 * released by l2arc_read_done(). 5918 * Issue a null zio if the underlying buffer 5919 * was squashed to zero size by compression. 
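 * If the allocated size on the cache device (asize) differs from the
 * in-core size, a temporary ABD (l2rcb_abd) is used for the physical
 * read and l2arc_read_done() copies the valid data back before
 * freeing it.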
5920 */ 5921 ASSERT3U(arc_hdr_get_compress(hdr), !=, 5922 ZIO_COMPRESS_EMPTY); 5923 rzio = zio_read_phys(pio, vd, addr, 5924 asize, abd, 5925 ZIO_CHECKSUM_OFF, 5926 l2arc_read_done, cb, priority, 5927 zio_flags | ZIO_FLAG_CANFAIL | 5928 ZIO_FLAG_DONT_PROPAGATE | 5929 ZIO_FLAG_DONT_RETRY, B_FALSE); 5930 acb->acb_zio_head = rzio; 5931 5932 if (hash_lock != NULL) 5933 mutex_exit(hash_lock); 5934 5935 DTRACE_PROBE2(l2arc__read, vdev_t *, vd, 5936 zio_t *, rzio); 5937 ARCSTAT_INCR(arcstat_l2_read_bytes, 5938 HDR_GET_PSIZE(hdr)); 5939 5940 if (*arc_flags & ARC_FLAG_NOWAIT) { 5941 zio_nowait(rzio); 5942 goto out; 5943 } 5944 5945 ASSERT(*arc_flags & ARC_FLAG_WAIT); 5946 if (zio_wait(rzio) == 0) 5947 goto out; 5948 5949 /* l2arc read error; goto zio_read() */ 5950 if (hash_lock != NULL) 5951 mutex_enter(hash_lock); 5952 } else { 5953 DTRACE_PROBE1(l2arc__miss, 5954 arc_buf_hdr_t *, hdr); 5955 ARCSTAT_BUMP(arcstat_l2_misses); 5956 if (HDR_L2_WRITING(hdr)) 5957 ARCSTAT_BUMP(arcstat_l2_rw_clash); 5958 spa_config_exit(spa, SCL_L2ARC, vd); 5959 } 5960 } else { 5961 if (vd != NULL) 5962 spa_config_exit(spa, SCL_L2ARC, vd); 5963 5964 /* 5965 * Only a spa with l2 should contribute to l2 5966 * miss stats. (Including the case of having a 5967 * faulted cache device - that's also a miss.) 5968 */ 5969 if (spa_has_l2) { 5970 /* 5971 * Skip ARC stat bump for block pointers with 5972 * embedded data. The data are read from the 5973 * blkptr itself via 5974 * decode_embedded_bp_compressed(). 5975 */ 5976 if (!embedded_bp) { 5977 DTRACE_PROBE1(l2arc__miss, 5978 arc_buf_hdr_t *, hdr); 5979 ARCSTAT_BUMP(arcstat_l2_misses); 5980 } 5981 } 5982 } 5983 5984 rzio = zio_read(pio, spa, bp, hdr_abd, size, 5985 arc_read_done, hdr, priority, zio_flags, zb); 5986 acb->acb_zio_head = rzio; 5987 5988 if (hash_lock != NULL) 5989 mutex_exit(hash_lock); 5990 5991 if (*arc_flags & ARC_FLAG_WAIT) { 5992 rc = zio_wait(rzio); 5993 goto out; 5994 } 5995 5996 ASSERT(*arc_flags & ARC_FLAG_NOWAIT); 5997 zio_nowait(rzio); 5998 } 5999 6000 out: 6001 /* embedded bps don't actually go to disk */ 6002 if (!embedded_bp) 6003 spa_read_history_add(spa, zb, *arc_flags); 6004 spl_fstrans_unmark(cookie); 6005 return (rc); 6006 6007 done: 6008 if (done) 6009 done(NULL, zb, bp, buf, private); 6010 if (pio && rc != 0) { 6011 zio_t *zio = zio_null(pio, spa, NULL, NULL, NULL, zio_flags); 6012 zio->io_error = rc; 6013 zio_nowait(zio); 6014 } 6015 goto out; 6016 } 6017 6018 arc_prune_t * 6019 arc_add_prune_callback(arc_prune_func_t *func, void *private) 6020 { 6021 arc_prune_t *p; 6022 6023 p = kmem_alloc(sizeof (*p), KM_SLEEP); 6024 p->p_pfunc = func; 6025 p->p_private = private; 6026 list_link_init(&p->p_node); 6027 zfs_refcount_create(&p->p_refcnt); 6028 6029 mutex_enter(&arc_prune_mtx); 6030 zfs_refcount_add(&p->p_refcnt, &arc_prune_list); 6031 list_insert_head(&arc_prune_list, p); 6032 mutex_exit(&arc_prune_mtx); 6033 6034 return (p); 6035 } 6036 6037 void 6038 arc_remove_prune_callback(arc_prune_t *p) 6039 { 6040 boolean_t wait = B_FALSE; 6041 mutex_enter(&arc_prune_mtx); 6042 list_remove(&arc_prune_list, p); 6043 if (zfs_refcount_remove(&p->p_refcnt, &arc_prune_list) > 0) 6044 wait = B_TRUE; 6045 mutex_exit(&arc_prune_mtx); 6046 6047 /* wait for arc_prune_task to finish */ 6048 if (wait) 6049 taskq_wait_outstanding(arc_prune_taskq, 0); 6050 ASSERT0(zfs_refcount_count(&p->p_refcnt)); 6051 zfs_refcount_destroy(&p->p_refcnt); 6052 kmem_free(p, sizeof (*p)); 6053 } 6054 6055 /* 6056 * Helper function for arc_prune_async() it is responsible for safely 
6057 * handling the execution of a registered arc_prune_func_t. 6058 */ 6059 static void 6060 arc_prune_task(void *ptr) 6061 { 6062 arc_prune_t *ap = (arc_prune_t *)ptr; 6063 arc_prune_func_t *func = ap->p_pfunc; 6064 6065 if (func != NULL) 6066 func(ap->p_adjust, ap->p_private); 6067 6068 zfs_refcount_remove(&ap->p_refcnt, func); 6069 } 6070 6071 /* 6072 * Notify registered consumers they must drop holds on a portion of the ARC 6073 * buffers they reference. This provides a mechanism to ensure the ARC can 6074 * honor the metadata limit and reclaim otherwise pinned ARC buffers. 6075 * 6076 * This operation is performed asynchronously so it may be safely called 6077 * in the context of the arc_reclaim_thread(). A reference is taken here 6078 * for each registered arc_prune_t and the arc_prune_task() is responsible 6079 * for releasing it once the registered arc_prune_func_t has completed. 6080 */ 6081 static void 6082 arc_prune_async(uint64_t adjust) 6083 { 6084 arc_prune_t *ap; 6085 6086 mutex_enter(&arc_prune_mtx); 6087 for (ap = list_head(&arc_prune_list); ap != NULL; 6088 ap = list_next(&arc_prune_list, ap)) { 6089 6090 if (zfs_refcount_count(&ap->p_refcnt) >= 2) 6091 continue; 6092 6093 zfs_refcount_add(&ap->p_refcnt, ap->p_pfunc); 6094 ap->p_adjust = adjust; 6095 if (taskq_dispatch(arc_prune_taskq, arc_prune_task, 6096 ap, TQ_SLEEP) == TASKQID_INVALID) { 6097 zfs_refcount_remove(&ap->p_refcnt, ap->p_pfunc); 6098 continue; 6099 } 6100 ARCSTAT_BUMP(arcstat_prune); 6101 } 6102 mutex_exit(&arc_prune_mtx); 6103 } 6104 6105 /* 6106 * Notify the arc that a block was freed, and thus will never be used again. 6107 */ 6108 void 6109 arc_freed(spa_t *spa, const blkptr_t *bp) 6110 { 6111 arc_buf_hdr_t *hdr; 6112 kmutex_t *hash_lock; 6113 uint64_t guid = spa_load_guid(spa); 6114 6115 ASSERT(!BP_IS_EMBEDDED(bp)); 6116 6117 hdr = buf_hash_find(guid, bp, &hash_lock); 6118 if (hdr == NULL) 6119 return; 6120 6121 /* 6122 * We might be trying to free a block that is still doing I/O 6123 * (i.e. prefetch) or has some other reference (i.e. a dedup-ed, 6124 * dmu_sync-ed block). A block may also have a reference if it is 6125 * part of a dedup-ed, dmu_synced write. The dmu_sync() function would 6126 * have written the new block to its final resting place on disk but 6127 * without the dedup flag set. This would have left the hdr in the MRU 6128 * state and discoverable. When the txg finally syncs it detects that 6129 * the block was overridden in open context and issues an override I/O. 6130 * Since this is a dedup block, the override I/O will determine if the 6131 * block is already in the DDT. If so, then it will replace the io_bp 6132 * with the bp from the DDT and allow the I/O to finish. When the I/O 6133 * reaches the done callback, dbuf_write_override_done, it will 6134 * check to see if the io_bp and io_bp_override are identical. 6135 * If they are not, then it indicates that the bp was replaced with 6136 * the bp in the DDT and the override bp is freed. This allows 6137 * us to arrive here with a reference on a block that is being 6138 * freed. So if we have an I/O in progress, or a reference to 6139 * this hdr, then we don't destroy the hdr. 6140 */ 6141 if (!HDR_HAS_L1HDR(hdr) || 6142 zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)) { 6143 arc_change_state(arc_anon, hdr); 6144 arc_hdr_destroy(hdr); 6145 mutex_exit(hash_lock); 6146 } else { 6147 mutex_exit(hash_lock); 6148 } 6149 6150 } 6151 6152 /* 6153 * Release this buffer from the cache, making it an anonymous buffer. 
This 6154 * must be done after a read and prior to modifying the buffer contents. 6155 * If the buffer has more than one reference, we must make 6156 * a new hdr for the buffer. 6157 */ 6158 void 6159 arc_release(arc_buf_t *buf, const void *tag) 6160 { 6161 arc_buf_hdr_t *hdr = buf->b_hdr; 6162 6163 /* 6164 * It would be nice to assert that if its DMU metadata (level > 6165 * 0 || it's the dnode file), then it must be syncing context. 6166 * But we don't know that information at this level. 6167 */ 6168 6169 ASSERT(HDR_HAS_L1HDR(hdr)); 6170 6171 /* 6172 * We don't grab the hash lock prior to this check, because if 6173 * the buffer's header is in the arc_anon state, it won't be 6174 * linked into the hash table. 6175 */ 6176 if (hdr->b_l1hdr.b_state == arc_anon) { 6177 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 6178 ASSERT(!HDR_IN_HASH_TABLE(hdr)); 6179 ASSERT(!HDR_HAS_L2HDR(hdr)); 6180 6181 ASSERT3P(hdr->b_l1hdr.b_buf, ==, buf); 6182 ASSERT(ARC_BUF_LAST(buf)); 6183 ASSERT3S(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt), ==, 1); 6184 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 6185 6186 hdr->b_l1hdr.b_arc_access = 0; 6187 6188 /* 6189 * If the buf is being overridden then it may already 6190 * have a hdr that is not empty. 6191 */ 6192 buf_discard_identity(hdr); 6193 arc_buf_thaw(buf); 6194 6195 return; 6196 } 6197 6198 kmutex_t *hash_lock = HDR_LOCK(hdr); 6199 mutex_enter(hash_lock); 6200 6201 /* 6202 * This assignment is only valid as long as the hash_lock is 6203 * held, we must be careful not to reference state or the 6204 * b_state field after dropping the lock. 6205 */ 6206 arc_state_t *state = hdr->b_l1hdr.b_state; 6207 ASSERT3P(hash_lock, ==, HDR_LOCK(hdr)); 6208 ASSERT3P(state, !=, arc_anon); 6209 6210 /* this buffer is not on any list */ 6211 ASSERT3S(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt), >, 0); 6212 6213 if (HDR_HAS_L2HDR(hdr)) { 6214 mutex_enter(&hdr->b_l2hdr.b_dev->l2ad_mtx); 6215 6216 /* 6217 * We have to recheck this conditional again now that 6218 * we're holding the l2ad_mtx to prevent a race with 6219 * another thread which might be concurrently calling 6220 * l2arc_evict(). In that case, l2arc_evict() might have 6221 * destroyed the header's L2 portion as we were waiting 6222 * to acquire the l2ad_mtx. 6223 */ 6224 if (HDR_HAS_L2HDR(hdr)) 6225 arc_hdr_l2hdr_destroy(hdr); 6226 6227 mutex_exit(&hdr->b_l2hdr.b_dev->l2ad_mtx); 6228 } 6229 6230 /* 6231 * Do we have more than one buf? 6232 */ 6233 if (hdr->b_l1hdr.b_buf != buf || !ARC_BUF_LAST(buf)) { 6234 arc_buf_hdr_t *nhdr; 6235 uint64_t spa = hdr->b_spa; 6236 uint64_t psize = HDR_GET_PSIZE(hdr); 6237 uint64_t lsize = HDR_GET_LSIZE(hdr); 6238 boolean_t protected = HDR_PROTECTED(hdr); 6239 enum zio_compress compress = arc_hdr_get_compress(hdr); 6240 arc_buf_contents_t type = arc_buf_type(hdr); 6241 VERIFY3U(hdr->b_type, ==, type); 6242 6243 ASSERT(hdr->b_l1hdr.b_buf != buf || buf->b_next != NULL); 6244 VERIFY3S(remove_reference(hdr, tag), >, 0); 6245 6246 if (ARC_BUF_SHARED(buf) && !ARC_BUF_COMPRESSED(buf)) { 6247 ASSERT3P(hdr->b_l1hdr.b_buf, !=, buf); 6248 ASSERT(ARC_BUF_LAST(buf)); 6249 } 6250 6251 /* 6252 * Pull the data off of this hdr and attach it to 6253 * a new anonymous hdr. Also find the last buffer 6254 * in the hdr's buffer list. 6255 */ 6256 arc_buf_t *lastbuf = arc_buf_remove(hdr, buf); 6257 ASSERT3P(lastbuf, !=, NULL); 6258 6259 /* 6260 * If the current arc_buf_t and the hdr are sharing their data 6261 * buffer, then we must stop sharing that block. 
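 * After the sharing relationship is severed the hdr still needs a
 * b_pabd: it is either re-shared with the last remaining buffer or
 * rebuilt by copying buf's data into a newly allocated ABD (both
 * handled below).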
6262 */ 6263 if (ARC_BUF_SHARED(buf)) { 6264 ASSERT3P(hdr->b_l1hdr.b_buf, !=, buf); 6265 ASSERT(!arc_buf_is_shared(lastbuf)); 6266 6267 /* 6268 * First, sever the block sharing relationship between 6269 * buf and the arc_buf_hdr_t. 6270 */ 6271 arc_unshare_buf(hdr, buf); 6272 6273 /* 6274 * Now we need to recreate the hdr's b_pabd. Since we 6275 * have lastbuf handy, we try to share with it, but if 6276 * we can't then we allocate a new b_pabd and copy the 6277 * data from buf into it. 6278 */ 6279 if (arc_can_share(hdr, lastbuf)) { 6280 arc_share_buf(hdr, lastbuf); 6281 } else { 6282 arc_hdr_alloc_abd(hdr, 0); 6283 abd_copy_from_buf(hdr->b_l1hdr.b_pabd, 6284 buf->b_data, psize); 6285 } 6286 VERIFY3P(lastbuf->b_data, !=, NULL); 6287 } else if (HDR_SHARED_DATA(hdr)) { 6288 /* 6289 * Uncompressed shared buffers are always at the end 6290 * of the list. Compressed buffers don't have the 6291 * same requirements. This makes it hard to 6292 * simply assert that the lastbuf is shared so 6293 * we rely on the hdr's compression flags to determine 6294 * if we have a compressed, shared buffer. 6295 */ 6296 ASSERT(arc_buf_is_shared(lastbuf) || 6297 arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF); 6298 ASSERT(!arc_buf_is_shared(buf)); 6299 } 6300 6301 ASSERT(hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr)); 6302 ASSERT3P(state, !=, arc_l2c_only); 6303 6304 (void) zfs_refcount_remove_many(&state->arcs_size[type], 6305 arc_buf_size(buf), buf); 6306 6307 if (zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)) { 6308 ASSERT3P(state, !=, arc_l2c_only); 6309 (void) zfs_refcount_remove_many( 6310 &state->arcs_esize[type], 6311 arc_buf_size(buf), buf); 6312 } 6313 6314 arc_cksum_verify(buf); 6315 arc_buf_unwatch(buf); 6316 6317 /* if this is the last uncompressed buf free the checksum */ 6318 if (!arc_hdr_has_uncompressed_buf(hdr)) 6319 arc_cksum_free(hdr); 6320 6321 mutex_exit(hash_lock); 6322 6323 nhdr = arc_hdr_alloc(spa, psize, lsize, protected, 6324 compress, hdr->b_complevel, type); 6325 ASSERT3P(nhdr->b_l1hdr.b_buf, ==, NULL); 6326 ASSERT0(zfs_refcount_count(&nhdr->b_l1hdr.b_refcnt)); 6327 VERIFY3U(nhdr->b_type, ==, type); 6328 ASSERT(!HDR_SHARED_DATA(nhdr)); 6329 6330 nhdr->b_l1hdr.b_buf = buf; 6331 (void) zfs_refcount_add(&nhdr->b_l1hdr.b_refcnt, tag); 6332 buf->b_hdr = nhdr; 6333 6334 (void) zfs_refcount_add_many(&arc_anon->arcs_size[type], 6335 arc_buf_size(buf), buf); 6336 } else { 6337 ASSERT(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt) == 1); 6338 /* protected by hash lock, or hdr is on arc_anon */ 6339 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 6340 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 6341 hdr->b_l1hdr.b_mru_hits = 0; 6342 hdr->b_l1hdr.b_mru_ghost_hits = 0; 6343 hdr->b_l1hdr.b_mfu_hits = 0; 6344 hdr->b_l1hdr.b_mfu_ghost_hits = 0; 6345 arc_change_state(arc_anon, hdr); 6346 hdr->b_l1hdr.b_arc_access = 0; 6347 6348 mutex_exit(hash_lock); 6349 buf_discard_identity(hdr); 6350 arc_buf_thaw(buf); 6351 } 6352 } 6353 6354 int 6355 arc_released(arc_buf_t *buf) 6356 { 6357 return (buf->b_data != NULL && 6358 buf->b_hdr->b_l1hdr.b_state == arc_anon); 6359 } 6360 6361 #ifdef ZFS_DEBUG 6362 int 6363 arc_referenced(arc_buf_t *buf) 6364 { 6365 return (zfs_refcount_count(&buf->b_hdr->b_l1hdr.b_refcnt)); 6366 } 6367 #endif 6368 6369 static void 6370 arc_write_ready(zio_t *zio) 6371 { 6372 arc_write_callback_t *callback = zio->io_private; 6373 arc_buf_t *buf = callback->awcb_buf; 6374 arc_buf_hdr_t *hdr = buf->b_hdr; 6375 blkptr_t *bp = zio->io_bp; 6376 uint64_t psize = BP_IS_HOLE(bp) ? 
0 : BP_GET_PSIZE(bp); 6377 fstrans_cookie_t cookie = spl_fstrans_mark(); 6378 6379 ASSERT(HDR_HAS_L1HDR(hdr)); 6380 ASSERT(!zfs_refcount_is_zero(&buf->b_hdr->b_l1hdr.b_refcnt)); 6381 ASSERT3P(hdr->b_l1hdr.b_buf, !=, NULL); 6382 6383 /* 6384 * If we're reexecuting this zio because the pool suspended, then 6385 * cleanup any state that was previously set the first time the 6386 * callback was invoked. 6387 */ 6388 if (zio->io_flags & ZIO_FLAG_REEXECUTED) { 6389 arc_cksum_free(hdr); 6390 arc_buf_unwatch(buf); 6391 if (hdr->b_l1hdr.b_pabd != NULL) { 6392 if (ARC_BUF_SHARED(buf)) { 6393 arc_unshare_buf(hdr, buf); 6394 } else { 6395 ASSERT(!arc_buf_is_shared(buf)); 6396 arc_hdr_free_abd(hdr, B_FALSE); 6397 } 6398 } 6399 6400 if (HDR_HAS_RABD(hdr)) 6401 arc_hdr_free_abd(hdr, B_TRUE); 6402 } 6403 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 6404 ASSERT(!HDR_HAS_RABD(hdr)); 6405 ASSERT(!HDR_SHARED_DATA(hdr)); 6406 ASSERT(!arc_buf_is_shared(buf)); 6407 6408 callback->awcb_ready(zio, buf, callback->awcb_private); 6409 6410 if (HDR_IO_IN_PROGRESS(hdr)) { 6411 ASSERT(zio->io_flags & ZIO_FLAG_REEXECUTED); 6412 } else { 6413 arc_hdr_set_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 6414 add_reference(hdr, hdr); /* For IO_IN_PROGRESS. */ 6415 } 6416 6417 if (BP_IS_PROTECTED(bp)) { 6418 /* ZIL blocks are written through zio_rewrite */ 6419 ASSERT3U(BP_GET_TYPE(bp), !=, DMU_OT_INTENT_LOG); 6420 6421 if (BP_SHOULD_BYTESWAP(bp)) { 6422 if (BP_GET_LEVEL(bp) > 0) { 6423 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_UINT64; 6424 } else { 6425 hdr->b_l1hdr.b_byteswap = 6426 DMU_OT_BYTESWAP(BP_GET_TYPE(bp)); 6427 } 6428 } else { 6429 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS; 6430 } 6431 6432 arc_hdr_set_flags(hdr, ARC_FLAG_PROTECTED); 6433 hdr->b_crypt_hdr.b_ot = BP_GET_TYPE(bp); 6434 hdr->b_crypt_hdr.b_dsobj = zio->io_bookmark.zb_objset; 6435 zio_crypt_decode_params_bp(bp, hdr->b_crypt_hdr.b_salt, 6436 hdr->b_crypt_hdr.b_iv); 6437 zio_crypt_decode_mac_bp(bp, hdr->b_crypt_hdr.b_mac); 6438 } else { 6439 arc_hdr_clear_flags(hdr, ARC_FLAG_PROTECTED); 6440 } 6441 6442 /* 6443 * If this block was written for raw encryption but the zio layer 6444 * ended up only authenticating it, adjust the buffer flags now. 6445 */ 6446 if (BP_IS_AUTHENTICATED(bp) && ARC_BUF_ENCRYPTED(buf)) { 6447 arc_hdr_set_flags(hdr, ARC_FLAG_NOAUTH); 6448 buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED; 6449 if (BP_GET_COMPRESS(bp) == ZIO_COMPRESS_OFF) 6450 buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; 6451 } else if (BP_IS_HOLE(bp) && ARC_BUF_ENCRYPTED(buf)) { 6452 buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED; 6453 buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; 6454 } 6455 6456 /* this must be done after the buffer flags are adjusted */ 6457 arc_cksum_compute(buf); 6458 6459 enum zio_compress compress; 6460 if (BP_IS_HOLE(bp) || BP_IS_EMBEDDED(bp)) { 6461 compress = ZIO_COMPRESS_OFF; 6462 } else { 6463 ASSERT3U(HDR_GET_LSIZE(hdr), ==, BP_GET_LSIZE(bp)); 6464 compress = BP_GET_COMPRESS(bp); 6465 } 6466 HDR_SET_PSIZE(hdr, psize); 6467 arc_hdr_set_compress(hdr, compress); 6468 hdr->b_complevel = zio->io_prop.zp_complevel; 6469 6470 if (zio->io_error != 0 || psize == 0) 6471 goto out; 6472 6473 /* 6474 * Fill the hdr with data. If the buffer is encrypted we have no choice 6475 * but to copy the data into b_radb. If the hdr is compressed, the data 6476 * we want is available from the zio, otherwise we can take it from 6477 * the buf. 6478 * 6479 * We might be able to share the buf's data with the hdr here. 
However, 6480 * doing so would cause the ARC to be full of linear ABDs if we write a 6481 * lot of shareable data. As a compromise, we check whether scattered 6482 * ABDs are allowed, and assume that if they are then the user wants 6483 * the ARC to be primarily filled with them regardless of the data being 6484 * written. Therefore, if they're allowed then we allocate one and copy 6485 * the data into it; otherwise, we share the data directly if we can. 6486 */ 6487 if (ARC_BUF_ENCRYPTED(buf)) { 6488 ASSERT3U(psize, >, 0); 6489 ASSERT(ARC_BUF_COMPRESSED(buf)); 6490 arc_hdr_alloc_abd(hdr, ARC_HDR_ALLOC_RDATA | 6491 ARC_HDR_USE_RESERVE); 6492 abd_copy(hdr->b_crypt_hdr.b_rabd, zio->io_abd, psize); 6493 } else if (!(HDR_UNCACHED(hdr) || 6494 abd_size_alloc_linear(arc_buf_size(buf))) || 6495 !arc_can_share(hdr, buf)) { 6496 /* 6497 * Ideally, we would always copy the io_abd into b_pabd, but the 6498 * user may have disabled compressed ARC, thus we must check the 6499 * hdr's compression setting rather than the io_bp's. 6500 */ 6501 if (BP_IS_ENCRYPTED(bp)) { 6502 ASSERT3U(psize, >, 0); 6503 arc_hdr_alloc_abd(hdr, ARC_HDR_ALLOC_RDATA | 6504 ARC_HDR_USE_RESERVE); 6505 abd_copy(hdr->b_crypt_hdr.b_rabd, zio->io_abd, psize); 6506 } else if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF && 6507 !ARC_BUF_COMPRESSED(buf)) { 6508 ASSERT3U(psize, >, 0); 6509 arc_hdr_alloc_abd(hdr, ARC_HDR_USE_RESERVE); 6510 abd_copy(hdr->b_l1hdr.b_pabd, zio->io_abd, psize); 6511 } else { 6512 ASSERT3U(zio->io_orig_size, ==, arc_hdr_size(hdr)); 6513 arc_hdr_alloc_abd(hdr, ARC_HDR_USE_RESERVE); 6514 abd_copy_from_buf(hdr->b_l1hdr.b_pabd, buf->b_data, 6515 arc_buf_size(buf)); 6516 } 6517 } else { 6518 ASSERT3P(buf->b_data, ==, abd_to_buf(zio->io_orig_abd)); 6519 ASSERT3U(zio->io_orig_size, ==, arc_buf_size(buf)); 6520 ASSERT3P(hdr->b_l1hdr.b_buf, ==, buf); 6521 ASSERT(ARC_BUF_LAST(buf)); 6522 6523 arc_share_buf(hdr, buf); 6524 } 6525 6526 out: 6527 arc_hdr_verify(hdr, bp); 6528 spl_fstrans_unmark(cookie); 6529 } 6530 6531 static void 6532 arc_write_children_ready(zio_t *zio) 6533 { 6534 arc_write_callback_t *callback = zio->io_private; 6535 arc_buf_t *buf = callback->awcb_buf; 6536 6537 callback->awcb_children_ready(zio, buf, callback->awcb_private); 6538 } 6539 6540 static void 6541 arc_write_done(zio_t *zio) 6542 { 6543 arc_write_callback_t *callback = zio->io_private; 6544 arc_buf_t *buf = callback->awcb_buf; 6545 arc_buf_hdr_t *hdr = buf->b_hdr; 6546 6547 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); 6548 6549 if (zio->io_error == 0) { 6550 arc_hdr_verify(hdr, zio->io_bp); 6551 6552 if (BP_IS_HOLE(zio->io_bp) || BP_IS_EMBEDDED(zio->io_bp)) { 6553 buf_discard_identity(hdr); 6554 } else { 6555 hdr->b_dva = *BP_IDENTITY(zio->io_bp); 6556 hdr->b_birth = BP_PHYSICAL_BIRTH(zio->io_bp); 6557 } 6558 } else { 6559 ASSERT(HDR_EMPTY(hdr)); 6560 } 6561 6562 /* 6563 * If the block to be written was all-zero or compressed enough to be 6564 * embedded in the BP, no write was performed so there will be no 6565 * dva/birth/checksum. The buffer must therefore remain anonymous 6566 * (and uncached). 6567 */ 6568 if (!HDR_EMPTY(hdr)) { 6569 arc_buf_hdr_t *exists; 6570 kmutex_t *hash_lock; 6571 6572 ASSERT3U(zio->io_error, ==, 0); 6573 6574 arc_cksum_verify(buf); 6575 6576 exists = buf_hash_insert(hdr, &hash_lock); 6577 if (exists != NULL) { 6578 /* 6579 * This can only happen if we overwrite for 6580 * sync-to-convergence, because we remove 6581 * buffers from the hash table when we arc_free(). 
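 * Three cases are handled below: an IO_REWRITE of the same block
 * pointer (sync-to-convergence), a nopwrite, and a dedup write that
 * collided with an existing header.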
6582 */ 6583 if (zio->io_flags & ZIO_FLAG_IO_REWRITE) { 6584 if (!BP_EQUAL(&zio->io_bp_orig, zio->io_bp)) 6585 panic("bad overwrite, hdr=%p exists=%p", 6586 (void *)hdr, (void *)exists); 6587 ASSERT(zfs_refcount_is_zero( 6588 &exists->b_l1hdr.b_refcnt)); 6589 arc_change_state(arc_anon, exists); 6590 arc_hdr_destroy(exists); 6591 mutex_exit(hash_lock); 6592 exists = buf_hash_insert(hdr, &hash_lock); 6593 ASSERT3P(exists, ==, NULL); 6594 } else if (zio->io_flags & ZIO_FLAG_NOPWRITE) { 6595 /* nopwrite */ 6596 ASSERT(zio->io_prop.zp_nopwrite); 6597 if (!BP_EQUAL(&zio->io_bp_orig, zio->io_bp)) 6598 panic("bad nopwrite, hdr=%p exists=%p", 6599 (void *)hdr, (void *)exists); 6600 } else { 6601 /* Dedup */ 6602 ASSERT3P(hdr->b_l1hdr.b_buf, !=, NULL); 6603 ASSERT(ARC_BUF_LAST(hdr->b_l1hdr.b_buf)); 6604 ASSERT(hdr->b_l1hdr.b_state == arc_anon); 6605 ASSERT(BP_GET_DEDUP(zio->io_bp)); 6606 ASSERT(BP_GET_LEVEL(zio->io_bp) == 0); 6607 } 6608 } 6609 arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 6610 VERIFY3S(remove_reference(hdr, hdr), >, 0); 6611 /* if it's not anon, we are doing a scrub */ 6612 if (exists == NULL && hdr->b_l1hdr.b_state == arc_anon) 6613 arc_access(hdr, 0, B_FALSE); 6614 mutex_exit(hash_lock); 6615 } else { 6616 arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 6617 VERIFY3S(remove_reference(hdr, hdr), >, 0); 6618 } 6619 6620 callback->awcb_done(zio, buf, callback->awcb_private); 6621 6622 abd_free(zio->io_abd); 6623 kmem_free(callback, sizeof (arc_write_callback_t)); 6624 } 6625 6626 zio_t * 6627 arc_write(zio_t *pio, spa_t *spa, uint64_t txg, 6628 blkptr_t *bp, arc_buf_t *buf, boolean_t uncached, boolean_t l2arc, 6629 const zio_prop_t *zp, arc_write_done_func_t *ready, 6630 arc_write_done_func_t *children_ready, arc_write_done_func_t *done, 6631 void *private, zio_priority_t priority, int zio_flags, 6632 const zbookmark_phys_t *zb) 6633 { 6634 arc_buf_hdr_t *hdr = buf->b_hdr; 6635 arc_write_callback_t *callback; 6636 zio_t *zio; 6637 zio_prop_t localprop = *zp; 6638 6639 ASSERT3P(ready, !=, NULL); 6640 ASSERT3P(done, !=, NULL); 6641 ASSERT(!HDR_IO_ERROR(hdr)); 6642 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 6643 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); 6644 ASSERT3P(hdr->b_l1hdr.b_buf, !=, NULL); 6645 if (uncached) 6646 arc_hdr_set_flags(hdr, ARC_FLAG_UNCACHED); 6647 else if (l2arc) 6648 arc_hdr_set_flags(hdr, ARC_FLAG_L2CACHE); 6649 6650 if (ARC_BUF_ENCRYPTED(buf)) { 6651 ASSERT(ARC_BUF_COMPRESSED(buf)); 6652 localprop.zp_encrypt = B_TRUE; 6653 localprop.zp_compress = HDR_GET_COMPRESS(hdr); 6654 localprop.zp_complevel = hdr->b_complevel; 6655 localprop.zp_byteorder = 6656 (hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS) ? 
6657 ZFS_HOST_BYTEORDER : !ZFS_HOST_BYTEORDER; 6658 memcpy(localprop.zp_salt, hdr->b_crypt_hdr.b_salt, 6659 ZIO_DATA_SALT_LEN); 6660 memcpy(localprop.zp_iv, hdr->b_crypt_hdr.b_iv, 6661 ZIO_DATA_IV_LEN); 6662 memcpy(localprop.zp_mac, hdr->b_crypt_hdr.b_mac, 6663 ZIO_DATA_MAC_LEN); 6664 if (DMU_OT_IS_ENCRYPTED(localprop.zp_type)) { 6665 localprop.zp_nopwrite = B_FALSE; 6666 localprop.zp_copies = 6667 MIN(localprop.zp_copies, SPA_DVAS_PER_BP - 1); 6668 } 6669 zio_flags |= ZIO_FLAG_RAW; 6670 } else if (ARC_BUF_COMPRESSED(buf)) { 6671 ASSERT3U(HDR_GET_LSIZE(hdr), !=, arc_buf_size(buf)); 6672 localprop.zp_compress = HDR_GET_COMPRESS(hdr); 6673 localprop.zp_complevel = hdr->b_complevel; 6674 zio_flags |= ZIO_FLAG_RAW_COMPRESS; 6675 } 6676 callback = kmem_zalloc(sizeof (arc_write_callback_t), KM_SLEEP); 6677 callback->awcb_ready = ready; 6678 callback->awcb_children_ready = children_ready; 6679 callback->awcb_done = done; 6680 callback->awcb_private = private; 6681 callback->awcb_buf = buf; 6682 6683 /* 6684 * The hdr's b_pabd is now stale, free it now. A new data block 6685 * will be allocated when the zio pipeline calls arc_write_ready(). 6686 */ 6687 if (hdr->b_l1hdr.b_pabd != NULL) { 6688 /* 6689 * If the buf is currently sharing the data block with 6690 * the hdr then we need to break that relationship here. 6691 * The hdr will remain with a NULL data pointer and the 6692 * buf will take sole ownership of the block. 6693 */ 6694 if (ARC_BUF_SHARED(buf)) { 6695 arc_unshare_buf(hdr, buf); 6696 } else { 6697 ASSERT(!arc_buf_is_shared(buf)); 6698 arc_hdr_free_abd(hdr, B_FALSE); 6699 } 6700 VERIFY3P(buf->b_data, !=, NULL); 6701 } 6702 6703 if (HDR_HAS_RABD(hdr)) 6704 arc_hdr_free_abd(hdr, B_TRUE); 6705 6706 if (!(zio_flags & ZIO_FLAG_RAW)) 6707 arc_hdr_set_compress(hdr, ZIO_COMPRESS_OFF); 6708 6709 ASSERT(!arc_buf_is_shared(buf)); 6710 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 6711 6712 zio = zio_write(pio, spa, txg, bp, 6713 abd_get_from_buf(buf->b_data, HDR_GET_LSIZE(hdr)), 6714 HDR_GET_LSIZE(hdr), arc_buf_size(buf), &localprop, arc_write_ready, 6715 (children_ready != NULL) ? arc_write_children_ready : NULL, 6716 arc_write_done, callback, priority, zio_flags, zb); 6717 6718 return (zio); 6719 } 6720 6721 void 6722 arc_tempreserve_clear(uint64_t reserve) 6723 { 6724 atomic_add_64(&arc_tempreserve, -reserve); 6725 ASSERT((int64_t)arc_tempreserve >= 0); 6726 } 6727 6728 int 6729 arc_tempreserve_space(spa_t *spa, uint64_t reserve, uint64_t txg) 6730 { 6731 int error; 6732 uint64_t anon_size; 6733 6734 if (!arc_no_grow && 6735 reserve > arc_c/4 && 6736 reserve * 4 > (2ULL << SPA_MAXBLOCKSHIFT)) 6737 arc_c = MIN(arc_c_max, reserve * 4); 6738 6739 /* 6740 * Throttle when the calculated memory footprint for the TXG 6741 * exceeds the target ARC size. 6742 */ 6743 if (reserve > arc_c) { 6744 DMU_TX_STAT_BUMP(dmu_tx_memory_reserve); 6745 return (SET_ERROR(ERESTART)); 6746 } 6747 6748 /* 6749 * Don't count loaned bufs as in flight dirty data to prevent long 6750 * network delays from blocking transactions that are ready to be 6751 * assigned to a txg. 6752 */ 6753 6754 /* assert that it has not wrapped around */ 6755 ASSERT3S(atomic_add_64_nv(&arc_loaned_bytes, 0), >=, 0); 6756 6757 anon_size = MAX((int64_t) 6758 (zfs_refcount_count(&arc_anon->arcs_size[ARC_BUFC_DATA]) + 6759 zfs_refcount_count(&arc_anon->arcs_size[ARC_BUFC_METADATA]) - 6760 arc_loaned_bytes), 0); 6761 6762 /* 6763 * Writes will, almost always, require additional memory allocations 6764 * in order to compress/encrypt/etc the data. 
We therefore need to 6765 * make sure that there is sufficient available memory for this. 6766 */ 6767 error = arc_memory_throttle(spa, reserve, txg); 6768 if (error != 0) 6769 return (error); 6770 6771 /* 6772 * Throttle writes when the amount of dirty data in the cache 6773 * gets too large. We try to keep the cache less than half full 6774 * of dirty blocks so that our sync times don't grow too large. 6775 * 6776 * In the case of one pool being built on another pool, we want 6777 * to make sure we don't end up throttling the lower (backing) 6778 * pool when the upper pool is the majority contributor to dirty 6779 * data. To insure we make forward progress during throttling, we 6780 * also check the current pool's net dirty data and only throttle 6781 * if it exceeds zfs_arc_pool_dirty_percent of the anonymous dirty 6782 * data in the cache. 6783 * 6784 * Note: if two requests come in concurrently, we might let them 6785 * both succeed, when one of them should fail. Not a huge deal. 6786 */ 6787 uint64_t total_dirty = reserve + arc_tempreserve + anon_size; 6788 uint64_t spa_dirty_anon = spa_dirty_data(spa); 6789 uint64_t rarc_c = arc_warm ? arc_c : arc_c_max; 6790 if (total_dirty > rarc_c * zfs_arc_dirty_limit_percent / 100 && 6791 anon_size > rarc_c * zfs_arc_anon_limit_percent / 100 && 6792 spa_dirty_anon > anon_size * zfs_arc_pool_dirty_percent / 100) { 6793 #ifdef ZFS_DEBUG 6794 uint64_t meta_esize = zfs_refcount_count( 6795 &arc_anon->arcs_esize[ARC_BUFC_METADATA]); 6796 uint64_t data_esize = 6797 zfs_refcount_count(&arc_anon->arcs_esize[ARC_BUFC_DATA]); 6798 dprintf("failing, arc_tempreserve=%lluK anon_meta=%lluK " 6799 "anon_data=%lluK tempreserve=%lluK rarc_c=%lluK\n", 6800 (u_longlong_t)arc_tempreserve >> 10, 6801 (u_longlong_t)meta_esize >> 10, 6802 (u_longlong_t)data_esize >> 10, 6803 (u_longlong_t)reserve >> 10, 6804 (u_longlong_t)rarc_c >> 10); 6805 #endif 6806 DMU_TX_STAT_BUMP(dmu_tx_dirty_throttle); 6807 return (SET_ERROR(ERESTART)); 6808 } 6809 atomic_add_64(&arc_tempreserve, reserve); 6810 return (0); 6811 } 6812 6813 static void 6814 arc_kstat_update_state(arc_state_t *state, kstat_named_t *size, 6815 kstat_named_t *data, kstat_named_t *metadata, 6816 kstat_named_t *evict_data, kstat_named_t *evict_metadata) 6817 { 6818 data->value.ui64 = 6819 zfs_refcount_count(&state->arcs_size[ARC_BUFC_DATA]); 6820 metadata->value.ui64 = 6821 zfs_refcount_count(&state->arcs_size[ARC_BUFC_METADATA]); 6822 size->value.ui64 = data->value.ui64 + metadata->value.ui64; 6823 evict_data->value.ui64 = 6824 zfs_refcount_count(&state->arcs_esize[ARC_BUFC_DATA]); 6825 evict_metadata->value.ui64 = 6826 zfs_refcount_count(&state->arcs_esize[ARC_BUFC_METADATA]); 6827 } 6828 6829 static int 6830 arc_kstat_update(kstat_t *ksp, int rw) 6831 { 6832 arc_stats_t *as = ksp->ks_data; 6833 6834 if (rw == KSTAT_WRITE) 6835 return (SET_ERROR(EACCES)); 6836 6837 as->arcstat_hits.value.ui64 = 6838 wmsum_value(&arc_sums.arcstat_hits); 6839 as->arcstat_iohits.value.ui64 = 6840 wmsum_value(&arc_sums.arcstat_iohits); 6841 as->arcstat_misses.value.ui64 = 6842 wmsum_value(&arc_sums.arcstat_misses); 6843 as->arcstat_demand_data_hits.value.ui64 = 6844 wmsum_value(&arc_sums.arcstat_demand_data_hits); 6845 as->arcstat_demand_data_iohits.value.ui64 = 6846 wmsum_value(&arc_sums.arcstat_demand_data_iohits); 6847 as->arcstat_demand_data_misses.value.ui64 = 6848 wmsum_value(&arc_sums.arcstat_demand_data_misses); 6849 as->arcstat_demand_metadata_hits.value.ui64 = 6850 wmsum_value(&arc_sums.arcstat_demand_metadata_hits); 6851 
as->arcstat_demand_metadata_iohits.value.ui64 = 6852 wmsum_value(&arc_sums.arcstat_demand_metadata_iohits); 6853 as->arcstat_demand_metadata_misses.value.ui64 = 6854 wmsum_value(&arc_sums.arcstat_demand_metadata_misses); 6855 as->arcstat_prefetch_data_hits.value.ui64 = 6856 wmsum_value(&arc_sums.arcstat_prefetch_data_hits); 6857 as->arcstat_prefetch_data_iohits.value.ui64 = 6858 wmsum_value(&arc_sums.arcstat_prefetch_data_iohits); 6859 as->arcstat_prefetch_data_misses.value.ui64 = 6860 wmsum_value(&arc_sums.arcstat_prefetch_data_misses); 6861 as->arcstat_prefetch_metadata_hits.value.ui64 = 6862 wmsum_value(&arc_sums.arcstat_prefetch_metadata_hits); 6863 as->arcstat_prefetch_metadata_iohits.value.ui64 = 6864 wmsum_value(&arc_sums.arcstat_prefetch_metadata_iohits); 6865 as->arcstat_prefetch_metadata_misses.value.ui64 = 6866 wmsum_value(&arc_sums.arcstat_prefetch_metadata_misses); 6867 as->arcstat_mru_hits.value.ui64 = 6868 wmsum_value(&arc_sums.arcstat_mru_hits); 6869 as->arcstat_mru_ghost_hits.value.ui64 = 6870 wmsum_value(&arc_sums.arcstat_mru_ghost_hits); 6871 as->arcstat_mfu_hits.value.ui64 = 6872 wmsum_value(&arc_sums.arcstat_mfu_hits); 6873 as->arcstat_mfu_ghost_hits.value.ui64 = 6874 wmsum_value(&arc_sums.arcstat_mfu_ghost_hits); 6875 as->arcstat_uncached_hits.value.ui64 = 6876 wmsum_value(&arc_sums.arcstat_uncached_hits); 6877 as->arcstat_deleted.value.ui64 = 6878 wmsum_value(&arc_sums.arcstat_deleted); 6879 as->arcstat_mutex_miss.value.ui64 = 6880 wmsum_value(&arc_sums.arcstat_mutex_miss); 6881 as->arcstat_access_skip.value.ui64 = 6882 wmsum_value(&arc_sums.arcstat_access_skip); 6883 as->arcstat_evict_skip.value.ui64 = 6884 wmsum_value(&arc_sums.arcstat_evict_skip); 6885 as->arcstat_evict_not_enough.value.ui64 = 6886 wmsum_value(&arc_sums.arcstat_evict_not_enough); 6887 as->arcstat_evict_l2_cached.value.ui64 = 6888 wmsum_value(&arc_sums.arcstat_evict_l2_cached); 6889 as->arcstat_evict_l2_eligible.value.ui64 = 6890 wmsum_value(&arc_sums.arcstat_evict_l2_eligible); 6891 as->arcstat_evict_l2_eligible_mfu.value.ui64 = 6892 wmsum_value(&arc_sums.arcstat_evict_l2_eligible_mfu); 6893 as->arcstat_evict_l2_eligible_mru.value.ui64 = 6894 wmsum_value(&arc_sums.arcstat_evict_l2_eligible_mru); 6895 as->arcstat_evict_l2_ineligible.value.ui64 = 6896 wmsum_value(&arc_sums.arcstat_evict_l2_ineligible); 6897 as->arcstat_evict_l2_skip.value.ui64 = 6898 wmsum_value(&arc_sums.arcstat_evict_l2_skip); 6899 as->arcstat_hash_collisions.value.ui64 = 6900 wmsum_value(&arc_sums.arcstat_hash_collisions); 6901 as->arcstat_hash_chains.value.ui64 = 6902 wmsum_value(&arc_sums.arcstat_hash_chains); 6903 as->arcstat_size.value.ui64 = 6904 aggsum_value(&arc_sums.arcstat_size); 6905 as->arcstat_compressed_size.value.ui64 = 6906 wmsum_value(&arc_sums.arcstat_compressed_size); 6907 as->arcstat_uncompressed_size.value.ui64 = 6908 wmsum_value(&arc_sums.arcstat_uncompressed_size); 6909 as->arcstat_overhead_size.value.ui64 = 6910 wmsum_value(&arc_sums.arcstat_overhead_size); 6911 as->arcstat_hdr_size.value.ui64 = 6912 wmsum_value(&arc_sums.arcstat_hdr_size); 6913 as->arcstat_data_size.value.ui64 = 6914 wmsum_value(&arc_sums.arcstat_data_size); 6915 as->arcstat_metadata_size.value.ui64 = 6916 wmsum_value(&arc_sums.arcstat_metadata_size); 6917 as->arcstat_dbuf_size.value.ui64 = 6918 wmsum_value(&arc_sums.arcstat_dbuf_size); 6919 #if defined(COMPAT_FREEBSD11) 6920 as->arcstat_other_size.value.ui64 = 6921 wmsum_value(&arc_sums.arcstat_bonus_size) + 6922 wmsum_value(&arc_sums.arcstat_dnode_size) + 6923 
wmsum_value(&arc_sums.arcstat_dbuf_size); 6924 #endif 6925 6926 arc_kstat_update_state(arc_anon, 6927 &as->arcstat_anon_size, 6928 &as->arcstat_anon_data, 6929 &as->arcstat_anon_metadata, 6930 &as->arcstat_anon_evictable_data, 6931 &as->arcstat_anon_evictable_metadata); 6932 arc_kstat_update_state(arc_mru, 6933 &as->arcstat_mru_size, 6934 &as->arcstat_mru_data, 6935 &as->arcstat_mru_metadata, 6936 &as->arcstat_mru_evictable_data, 6937 &as->arcstat_mru_evictable_metadata); 6938 arc_kstat_update_state(arc_mru_ghost, 6939 &as->arcstat_mru_ghost_size, 6940 &as->arcstat_mru_ghost_data, 6941 &as->arcstat_mru_ghost_metadata, 6942 &as->arcstat_mru_ghost_evictable_data, 6943 &as->arcstat_mru_ghost_evictable_metadata); 6944 arc_kstat_update_state(arc_mfu, 6945 &as->arcstat_mfu_size, 6946 &as->arcstat_mfu_data, 6947 &as->arcstat_mfu_metadata, 6948 &as->arcstat_mfu_evictable_data, 6949 &as->arcstat_mfu_evictable_metadata); 6950 arc_kstat_update_state(arc_mfu_ghost, 6951 &as->arcstat_mfu_ghost_size, 6952 &as->arcstat_mfu_ghost_data, 6953 &as->arcstat_mfu_ghost_metadata, 6954 &as->arcstat_mfu_ghost_evictable_data, 6955 &as->arcstat_mfu_ghost_evictable_metadata); 6956 arc_kstat_update_state(arc_uncached, 6957 &as->arcstat_uncached_size, 6958 &as->arcstat_uncached_data, 6959 &as->arcstat_uncached_metadata, 6960 &as->arcstat_uncached_evictable_data, 6961 &as->arcstat_uncached_evictable_metadata); 6962 6963 as->arcstat_dnode_size.value.ui64 = 6964 wmsum_value(&arc_sums.arcstat_dnode_size); 6965 as->arcstat_bonus_size.value.ui64 = 6966 wmsum_value(&arc_sums.arcstat_bonus_size); 6967 as->arcstat_l2_hits.value.ui64 = 6968 wmsum_value(&arc_sums.arcstat_l2_hits); 6969 as->arcstat_l2_misses.value.ui64 = 6970 wmsum_value(&arc_sums.arcstat_l2_misses); 6971 as->arcstat_l2_prefetch_asize.value.ui64 = 6972 wmsum_value(&arc_sums.arcstat_l2_prefetch_asize); 6973 as->arcstat_l2_mru_asize.value.ui64 = 6974 wmsum_value(&arc_sums.arcstat_l2_mru_asize); 6975 as->arcstat_l2_mfu_asize.value.ui64 = 6976 wmsum_value(&arc_sums.arcstat_l2_mfu_asize); 6977 as->arcstat_l2_bufc_data_asize.value.ui64 = 6978 wmsum_value(&arc_sums.arcstat_l2_bufc_data_asize); 6979 as->arcstat_l2_bufc_metadata_asize.value.ui64 = 6980 wmsum_value(&arc_sums.arcstat_l2_bufc_metadata_asize); 6981 as->arcstat_l2_feeds.value.ui64 = 6982 wmsum_value(&arc_sums.arcstat_l2_feeds); 6983 as->arcstat_l2_rw_clash.value.ui64 = 6984 wmsum_value(&arc_sums.arcstat_l2_rw_clash); 6985 as->arcstat_l2_read_bytes.value.ui64 = 6986 wmsum_value(&arc_sums.arcstat_l2_read_bytes); 6987 as->arcstat_l2_write_bytes.value.ui64 = 6988 wmsum_value(&arc_sums.arcstat_l2_write_bytes); 6989 as->arcstat_l2_writes_sent.value.ui64 = 6990 wmsum_value(&arc_sums.arcstat_l2_writes_sent); 6991 as->arcstat_l2_writes_done.value.ui64 = 6992 wmsum_value(&arc_sums.arcstat_l2_writes_done); 6993 as->arcstat_l2_writes_error.value.ui64 = 6994 wmsum_value(&arc_sums.arcstat_l2_writes_error); 6995 as->arcstat_l2_writes_lock_retry.value.ui64 = 6996 wmsum_value(&arc_sums.arcstat_l2_writes_lock_retry); 6997 as->arcstat_l2_evict_lock_retry.value.ui64 = 6998 wmsum_value(&arc_sums.arcstat_l2_evict_lock_retry); 6999 as->arcstat_l2_evict_reading.value.ui64 = 7000 wmsum_value(&arc_sums.arcstat_l2_evict_reading); 7001 as->arcstat_l2_evict_l1cached.value.ui64 = 7002 wmsum_value(&arc_sums.arcstat_l2_evict_l1cached); 7003 as->arcstat_l2_free_on_write.value.ui64 = 7004 wmsum_value(&arc_sums.arcstat_l2_free_on_write); 7005 as->arcstat_l2_abort_lowmem.value.ui64 = 7006 wmsum_value(&arc_sums.arcstat_l2_abort_lowmem); 7007 
as->arcstat_l2_cksum_bad.value.ui64 = 7008 wmsum_value(&arc_sums.arcstat_l2_cksum_bad); 7009 as->arcstat_l2_io_error.value.ui64 = 7010 wmsum_value(&arc_sums.arcstat_l2_io_error); 7011 as->arcstat_l2_lsize.value.ui64 = 7012 wmsum_value(&arc_sums.arcstat_l2_lsize); 7013 as->arcstat_l2_psize.value.ui64 = 7014 wmsum_value(&arc_sums.arcstat_l2_psize); 7015 as->arcstat_l2_hdr_size.value.ui64 = 7016 aggsum_value(&arc_sums.arcstat_l2_hdr_size); 7017 as->arcstat_l2_log_blk_writes.value.ui64 = 7018 wmsum_value(&arc_sums.arcstat_l2_log_blk_writes); 7019 as->arcstat_l2_log_blk_asize.value.ui64 = 7020 wmsum_value(&arc_sums.arcstat_l2_log_blk_asize); 7021 as->arcstat_l2_log_blk_count.value.ui64 = 7022 wmsum_value(&arc_sums.arcstat_l2_log_blk_count); 7023 as->arcstat_l2_rebuild_success.value.ui64 = 7024 wmsum_value(&arc_sums.arcstat_l2_rebuild_success); 7025 as->arcstat_l2_rebuild_abort_unsupported.value.ui64 = 7026 wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_unsupported); 7027 as->arcstat_l2_rebuild_abort_io_errors.value.ui64 = 7028 wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_io_errors); 7029 as->arcstat_l2_rebuild_abort_dh_errors.value.ui64 = 7030 wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_dh_errors); 7031 as->arcstat_l2_rebuild_abort_cksum_lb_errors.value.ui64 = 7032 wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_cksum_lb_errors); 7033 as->arcstat_l2_rebuild_abort_lowmem.value.ui64 = 7034 wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_lowmem); 7035 as->arcstat_l2_rebuild_size.value.ui64 = 7036 wmsum_value(&arc_sums.arcstat_l2_rebuild_size); 7037 as->arcstat_l2_rebuild_asize.value.ui64 = 7038 wmsum_value(&arc_sums.arcstat_l2_rebuild_asize); 7039 as->arcstat_l2_rebuild_bufs.value.ui64 = 7040 wmsum_value(&arc_sums.arcstat_l2_rebuild_bufs); 7041 as->arcstat_l2_rebuild_bufs_precached.value.ui64 = 7042 wmsum_value(&arc_sums.arcstat_l2_rebuild_bufs_precached); 7043 as->arcstat_l2_rebuild_log_blks.value.ui64 = 7044 wmsum_value(&arc_sums.arcstat_l2_rebuild_log_blks); 7045 as->arcstat_memory_throttle_count.value.ui64 = 7046 wmsum_value(&arc_sums.arcstat_memory_throttle_count); 7047 as->arcstat_memory_direct_count.value.ui64 = 7048 wmsum_value(&arc_sums.arcstat_memory_direct_count); 7049 as->arcstat_memory_indirect_count.value.ui64 = 7050 wmsum_value(&arc_sums.arcstat_memory_indirect_count); 7051 7052 as->arcstat_memory_all_bytes.value.ui64 = 7053 arc_all_memory(); 7054 as->arcstat_memory_free_bytes.value.ui64 = 7055 arc_free_memory(); 7056 as->arcstat_memory_available_bytes.value.i64 = 7057 arc_available_memory(); 7058 7059 as->arcstat_prune.value.ui64 = 7060 wmsum_value(&arc_sums.arcstat_prune); 7061 as->arcstat_meta_used.value.ui64 = 7062 wmsum_value(&arc_sums.arcstat_meta_used); 7063 as->arcstat_async_upgrade_sync.value.ui64 = 7064 wmsum_value(&arc_sums.arcstat_async_upgrade_sync); 7065 as->arcstat_predictive_prefetch.value.ui64 = 7066 wmsum_value(&arc_sums.arcstat_predictive_prefetch); 7067 as->arcstat_demand_hit_predictive_prefetch.value.ui64 = 7068 wmsum_value(&arc_sums.arcstat_demand_hit_predictive_prefetch); 7069 as->arcstat_demand_iohit_predictive_prefetch.value.ui64 = 7070 wmsum_value(&arc_sums.arcstat_demand_iohit_predictive_prefetch); 7071 as->arcstat_prescient_prefetch.value.ui64 = 7072 wmsum_value(&arc_sums.arcstat_prescient_prefetch); 7073 as->arcstat_demand_hit_prescient_prefetch.value.ui64 = 7074 wmsum_value(&arc_sums.arcstat_demand_hit_prescient_prefetch); 7075 as->arcstat_demand_iohit_prescient_prefetch.value.ui64 = 7076 
wmsum_value(&arc_sums.arcstat_demand_iohit_prescient_prefetch); 7077 as->arcstat_raw_size.value.ui64 = 7078 wmsum_value(&arc_sums.arcstat_raw_size); 7079 as->arcstat_cached_only_in_progress.value.ui64 = 7080 wmsum_value(&arc_sums.arcstat_cached_only_in_progress); 7081 as->arcstat_abd_chunk_waste_size.value.ui64 = 7082 wmsum_value(&arc_sums.arcstat_abd_chunk_waste_size); 7083 7084 return (0); 7085 } 7086 7087 /* 7088 * This function *must* return indices evenly distributed between all 7089 * sublists of the multilist. This is needed due to how the ARC eviction 7090 * code is laid out; arc_evict_state() assumes ARC buffers are evenly 7091 * distributed between all sublists and uses this assumption when 7092 * deciding which sublist to evict from and how much to evict from it. 7093 */ 7094 static unsigned int 7095 arc_state_multilist_index_func(multilist_t *ml, void *obj) 7096 { 7097 arc_buf_hdr_t *hdr = obj; 7098 7099 /* 7100 * We rely on b_dva to generate evenly distributed index 7101 * numbers using buf_hash below. So, as an added precaution, 7102 * let's make sure we never add empty buffers to the arc lists. 7103 */ 7104 ASSERT(!HDR_EMPTY(hdr)); 7105 7106 /* 7107 * The assumption here is that the hash value for a given 7108 * arc_buf_hdr_t will remain constant throughout its lifetime 7109 * (i.e. its b_spa, b_dva, and b_birth fields don't change). 7110 * Thus, we don't need to store the header's sublist index 7111 * on insertion, as this index can be recalculated on removal. 7112 * 7113 * Also, the low order bits of the hash value are thought to be 7114 * distributed evenly. Otherwise, in the case that the multilist 7115 * has a power of two number of sublists, each sublist's usage 7116 * would not be evenly distributed. In this context full 64-bit 7117 * division would be a waste of time, so limit it to 32 bits. 7118 */ 7119 return ((unsigned int)buf_hash(hdr->b_spa, &hdr->b_dva, hdr->b_birth) % 7120 multilist_get_num_sublists(ml)); 7121 } 7122 7123 static unsigned int 7124 arc_state_l2c_multilist_index_func(multilist_t *ml, void *obj) 7125 { 7126 panic("Header %p insert into arc_l2c_only %p", obj, ml); 7127 } 7128 7129 #define WARN_IF_TUNING_IGNORED(tuning, value, do_warn) do { \ 7130 if ((do_warn) && (tuning) && ((tuning) != (value))) { \ 7131 cmn_err(CE_WARN, \ 7132 "ignoring tunable %s (using %llu instead)", \ 7133 (#tuning), (u_longlong_t)(value)); \ 7134 } \ 7135 } while (0) 7136 7137 /* 7138 * Called during module initialization and periodically thereafter to 7139 * apply reasonable changes to the exposed performance tunings. Can also be 7140 * called explicitly by param_set_arc_*() functions when ARC tunables are 7141 * updated manually. Non-zero zfs_* values which differ from the currently set 7142 * values will be applied.
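 *
 * As a rough sketch of the clamping below (illustrative, not a statement
 * about defaults): lowering zfs_arc_max pulls arc_c_max down and re-clamps
 * arc_c via MIN(arc_c, arc_c_max), while raising zfs_arc_min pulls
 * arc_c_min up and re-clamps arc_c via MAX(arc_c, arc_c_min). Values
 * outside their valid range are not applied and, when verbose, are
 * reported via WARN_IF_TUNING_IGNORED().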
7143 */ 7144 void 7145 arc_tuning_update(boolean_t verbose) 7146 { 7147 uint64_t allmem = arc_all_memory(); 7148 7149 /* Valid range: 32M - <arc_c_max> */ 7150 if ((zfs_arc_min) && (zfs_arc_min != arc_c_min) && 7151 (zfs_arc_min >= 2ULL << SPA_MAXBLOCKSHIFT) && 7152 (zfs_arc_min <= arc_c_max)) { 7153 arc_c_min = zfs_arc_min; 7154 arc_c = MAX(arc_c, arc_c_min); 7155 } 7156 WARN_IF_TUNING_IGNORED(zfs_arc_min, arc_c_min, verbose); 7157 7158 /* Valid range: 64M - <all physical memory> */ 7159 if ((zfs_arc_max) && (zfs_arc_max != arc_c_max) && 7160 (zfs_arc_max >= MIN_ARC_MAX) && (zfs_arc_max < allmem) && 7161 (zfs_arc_max > arc_c_min)) { 7162 arc_c_max = zfs_arc_max; 7163 arc_c = MIN(arc_c, arc_c_max); 7164 if (arc_dnode_limit > arc_c_max) 7165 arc_dnode_limit = arc_c_max; 7166 } 7167 WARN_IF_TUNING_IGNORED(zfs_arc_max, arc_c_max, verbose); 7168 7169 /* Valid range: 0 - <all physical memory> */ 7170 arc_dnode_limit = zfs_arc_dnode_limit ? zfs_arc_dnode_limit : 7171 MIN(zfs_arc_dnode_limit_percent, 100) * arc_c_max / 100; 7172 WARN_IF_TUNING_IGNORED(zfs_arc_dnode_limit, arc_dnode_limit, verbose); 7173 7174 /* Valid range: 1 - N */ 7175 if (zfs_arc_grow_retry) 7176 arc_grow_retry = zfs_arc_grow_retry; 7177 7178 /* Valid range: 1 - N */ 7179 if (zfs_arc_shrink_shift) { 7180 arc_shrink_shift = zfs_arc_shrink_shift; 7181 arc_no_grow_shift = MIN(arc_no_grow_shift, arc_shrink_shift -1); 7182 } 7183 7184 /* Valid range: 1 - N ms */ 7185 if (zfs_arc_min_prefetch_ms) 7186 arc_min_prefetch_ms = zfs_arc_min_prefetch_ms; 7187 7188 /* Valid range: 1 - N ms */ 7189 if (zfs_arc_min_prescient_prefetch_ms) { 7190 arc_min_prescient_prefetch_ms = 7191 zfs_arc_min_prescient_prefetch_ms; 7192 } 7193 7194 /* Valid range: 0 - 100 */ 7195 if (zfs_arc_lotsfree_percent <= 100) 7196 arc_lotsfree_percent = zfs_arc_lotsfree_percent; 7197 WARN_IF_TUNING_IGNORED(zfs_arc_lotsfree_percent, arc_lotsfree_percent, 7198 verbose); 7199 7200 /* Valid range: 0 - <all physical memory> */ 7201 if ((zfs_arc_sys_free) && (zfs_arc_sys_free != arc_sys_free)) 7202 arc_sys_free = MIN(zfs_arc_sys_free, allmem); 7203 WARN_IF_TUNING_IGNORED(zfs_arc_sys_free, arc_sys_free, verbose); 7204 } 7205 7206 static void 7207 arc_state_multilist_init(multilist_t *ml, 7208 multilist_sublist_index_func_t *index_func, int *maxcountp) 7209 { 7210 multilist_create(ml, sizeof (arc_buf_hdr_t), 7211 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), index_func); 7212 *maxcountp = MAX(*maxcountp, multilist_get_num_sublists(ml)); 7213 } 7214 7215 static void 7216 arc_state_init(void) 7217 { 7218 int num_sublists = 0; 7219 7220 arc_state_multilist_init(&arc_mru->arcs_list[ARC_BUFC_METADATA], 7221 arc_state_multilist_index_func, &num_sublists); 7222 arc_state_multilist_init(&arc_mru->arcs_list[ARC_BUFC_DATA], 7223 arc_state_multilist_index_func, &num_sublists); 7224 arc_state_multilist_init(&arc_mru_ghost->arcs_list[ARC_BUFC_METADATA], 7225 arc_state_multilist_index_func, &num_sublists); 7226 arc_state_multilist_init(&arc_mru_ghost->arcs_list[ARC_BUFC_DATA], 7227 arc_state_multilist_index_func, &num_sublists); 7228 arc_state_multilist_init(&arc_mfu->arcs_list[ARC_BUFC_METADATA], 7229 arc_state_multilist_index_func, &num_sublists); 7230 arc_state_multilist_init(&arc_mfu->arcs_list[ARC_BUFC_DATA], 7231 arc_state_multilist_index_func, &num_sublists); 7232 arc_state_multilist_init(&arc_mfu_ghost->arcs_list[ARC_BUFC_METADATA], 7233 arc_state_multilist_index_func, &num_sublists); 7234 arc_state_multilist_init(&arc_mfu_ghost->arcs_list[ARC_BUFC_DATA], 7235 
arc_state_multilist_index_func, &num_sublists); 7236 arc_state_multilist_init(&arc_uncached->arcs_list[ARC_BUFC_METADATA], 7237 arc_state_multilist_index_func, &num_sublists); 7238 arc_state_multilist_init(&arc_uncached->arcs_list[ARC_BUFC_DATA], 7239 arc_state_multilist_index_func, &num_sublists); 7240 7241 /* 7242 * L2 headers should never be on the L2 state list since they don't 7243 * have L1 headers allocated. Special index function asserts that. 7244 */ 7245 arc_state_multilist_init(&arc_l2c_only->arcs_list[ARC_BUFC_METADATA], 7246 arc_state_l2c_multilist_index_func, &num_sublists); 7247 arc_state_multilist_init(&arc_l2c_only->arcs_list[ARC_BUFC_DATA], 7248 arc_state_l2c_multilist_index_func, &num_sublists); 7249 7250 /* 7251 * Keep track of the number of markers needed to reclaim buffers from 7252 * any ARC state. The markers will be pre-allocated so as to minimize 7253 * the number of memory allocations performed by the eviction thread. 7254 */ 7255 arc_state_evict_marker_count = num_sublists; 7256 7257 zfs_refcount_create(&arc_anon->arcs_esize[ARC_BUFC_METADATA]); 7258 zfs_refcount_create(&arc_anon->arcs_esize[ARC_BUFC_DATA]); 7259 zfs_refcount_create(&arc_mru->arcs_esize[ARC_BUFC_METADATA]); 7260 zfs_refcount_create(&arc_mru->arcs_esize[ARC_BUFC_DATA]); 7261 zfs_refcount_create(&arc_mru_ghost->arcs_esize[ARC_BUFC_METADATA]); 7262 zfs_refcount_create(&arc_mru_ghost->arcs_esize[ARC_BUFC_DATA]); 7263 zfs_refcount_create(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]); 7264 zfs_refcount_create(&arc_mfu->arcs_esize[ARC_BUFC_DATA]); 7265 zfs_refcount_create(&arc_mfu_ghost->arcs_esize[ARC_BUFC_METADATA]); 7266 zfs_refcount_create(&arc_mfu_ghost->arcs_esize[ARC_BUFC_DATA]); 7267 zfs_refcount_create(&arc_l2c_only->arcs_esize[ARC_BUFC_METADATA]); 7268 zfs_refcount_create(&arc_l2c_only->arcs_esize[ARC_BUFC_DATA]); 7269 zfs_refcount_create(&arc_uncached->arcs_esize[ARC_BUFC_METADATA]); 7270 zfs_refcount_create(&arc_uncached->arcs_esize[ARC_BUFC_DATA]); 7271 7272 zfs_refcount_create(&arc_anon->arcs_size[ARC_BUFC_DATA]); 7273 zfs_refcount_create(&arc_anon->arcs_size[ARC_BUFC_METADATA]); 7274 zfs_refcount_create(&arc_mru->arcs_size[ARC_BUFC_DATA]); 7275 zfs_refcount_create(&arc_mru->arcs_size[ARC_BUFC_METADATA]); 7276 zfs_refcount_create(&arc_mru_ghost->arcs_size[ARC_BUFC_DATA]); 7277 zfs_refcount_create(&arc_mru_ghost->arcs_size[ARC_BUFC_METADATA]); 7278 zfs_refcount_create(&arc_mfu->arcs_size[ARC_BUFC_DATA]); 7279 zfs_refcount_create(&arc_mfu->arcs_size[ARC_BUFC_METADATA]); 7280 zfs_refcount_create(&arc_mfu_ghost->arcs_size[ARC_BUFC_DATA]); 7281 zfs_refcount_create(&arc_mfu_ghost->arcs_size[ARC_BUFC_METADATA]); 7282 zfs_refcount_create(&arc_l2c_only->arcs_size[ARC_BUFC_DATA]); 7283 zfs_refcount_create(&arc_l2c_only->arcs_size[ARC_BUFC_METADATA]); 7284 zfs_refcount_create(&arc_uncached->arcs_size[ARC_BUFC_DATA]); 7285 zfs_refcount_create(&arc_uncached->arcs_size[ARC_BUFC_METADATA]); 7286 7287 wmsum_init(&arc_mru_ghost->arcs_hits[ARC_BUFC_DATA], 0); 7288 wmsum_init(&arc_mru_ghost->arcs_hits[ARC_BUFC_METADATA], 0); 7289 wmsum_init(&arc_mfu_ghost->arcs_hits[ARC_BUFC_DATA], 0); 7290 wmsum_init(&arc_mfu_ghost->arcs_hits[ARC_BUFC_METADATA], 0); 7291 7292 wmsum_init(&arc_sums.arcstat_hits, 0); 7293 wmsum_init(&arc_sums.arcstat_iohits, 0); 7294 wmsum_init(&arc_sums.arcstat_misses, 0); 7295 wmsum_init(&arc_sums.arcstat_demand_data_hits, 0); 7296 wmsum_init(&arc_sums.arcstat_demand_data_iohits, 0); 7297 wmsum_init(&arc_sums.arcstat_demand_data_misses, 0); 7298 wmsum_init(&arc_sums.arcstat_demand_metadata_hits, 
0); 7299 wmsum_init(&arc_sums.arcstat_demand_metadata_iohits, 0); 7300 wmsum_init(&arc_sums.arcstat_demand_metadata_misses, 0); 7301 wmsum_init(&arc_sums.arcstat_prefetch_data_hits, 0); 7302 wmsum_init(&arc_sums.arcstat_prefetch_data_iohits, 0); 7303 wmsum_init(&arc_sums.arcstat_prefetch_data_misses, 0); 7304 wmsum_init(&arc_sums.arcstat_prefetch_metadata_hits, 0); 7305 wmsum_init(&arc_sums.arcstat_prefetch_metadata_iohits, 0); 7306 wmsum_init(&arc_sums.arcstat_prefetch_metadata_misses, 0); 7307 wmsum_init(&arc_sums.arcstat_mru_hits, 0); 7308 wmsum_init(&arc_sums.arcstat_mru_ghost_hits, 0); 7309 wmsum_init(&arc_sums.arcstat_mfu_hits, 0); 7310 wmsum_init(&arc_sums.arcstat_mfu_ghost_hits, 0); 7311 wmsum_init(&arc_sums.arcstat_uncached_hits, 0); 7312 wmsum_init(&arc_sums.arcstat_deleted, 0); 7313 wmsum_init(&arc_sums.arcstat_mutex_miss, 0); 7314 wmsum_init(&arc_sums.arcstat_access_skip, 0); 7315 wmsum_init(&arc_sums.arcstat_evict_skip, 0); 7316 wmsum_init(&arc_sums.arcstat_evict_not_enough, 0); 7317 wmsum_init(&arc_sums.arcstat_evict_l2_cached, 0); 7318 wmsum_init(&arc_sums.arcstat_evict_l2_eligible, 0); 7319 wmsum_init(&arc_sums.arcstat_evict_l2_eligible_mfu, 0); 7320 wmsum_init(&arc_sums.arcstat_evict_l2_eligible_mru, 0); 7321 wmsum_init(&arc_sums.arcstat_evict_l2_ineligible, 0); 7322 wmsum_init(&arc_sums.arcstat_evict_l2_skip, 0); 7323 wmsum_init(&arc_sums.arcstat_hash_collisions, 0); 7324 wmsum_init(&arc_sums.arcstat_hash_chains, 0); 7325 aggsum_init(&arc_sums.arcstat_size, 0); 7326 wmsum_init(&arc_sums.arcstat_compressed_size, 0); 7327 wmsum_init(&arc_sums.arcstat_uncompressed_size, 0); 7328 wmsum_init(&arc_sums.arcstat_overhead_size, 0); 7329 wmsum_init(&arc_sums.arcstat_hdr_size, 0); 7330 wmsum_init(&arc_sums.arcstat_data_size, 0); 7331 wmsum_init(&arc_sums.arcstat_metadata_size, 0); 7332 wmsum_init(&arc_sums.arcstat_dbuf_size, 0); 7333 wmsum_init(&arc_sums.arcstat_dnode_size, 0); 7334 wmsum_init(&arc_sums.arcstat_bonus_size, 0); 7335 wmsum_init(&arc_sums.arcstat_l2_hits, 0); 7336 wmsum_init(&arc_sums.arcstat_l2_misses, 0); 7337 wmsum_init(&arc_sums.arcstat_l2_prefetch_asize, 0); 7338 wmsum_init(&arc_sums.arcstat_l2_mru_asize, 0); 7339 wmsum_init(&arc_sums.arcstat_l2_mfu_asize, 0); 7340 wmsum_init(&arc_sums.arcstat_l2_bufc_data_asize, 0); 7341 wmsum_init(&arc_sums.arcstat_l2_bufc_metadata_asize, 0); 7342 wmsum_init(&arc_sums.arcstat_l2_feeds, 0); 7343 wmsum_init(&arc_sums.arcstat_l2_rw_clash, 0); 7344 wmsum_init(&arc_sums.arcstat_l2_read_bytes, 0); 7345 wmsum_init(&arc_sums.arcstat_l2_write_bytes, 0); 7346 wmsum_init(&arc_sums.arcstat_l2_writes_sent, 0); 7347 wmsum_init(&arc_sums.arcstat_l2_writes_done, 0); 7348 wmsum_init(&arc_sums.arcstat_l2_writes_error, 0); 7349 wmsum_init(&arc_sums.arcstat_l2_writes_lock_retry, 0); 7350 wmsum_init(&arc_sums.arcstat_l2_evict_lock_retry, 0); 7351 wmsum_init(&arc_sums.arcstat_l2_evict_reading, 0); 7352 wmsum_init(&arc_sums.arcstat_l2_evict_l1cached, 0); 7353 wmsum_init(&arc_sums.arcstat_l2_free_on_write, 0); 7354 wmsum_init(&arc_sums.arcstat_l2_abort_lowmem, 0); 7355 wmsum_init(&arc_sums.arcstat_l2_cksum_bad, 0); 7356 wmsum_init(&arc_sums.arcstat_l2_io_error, 0); 7357 wmsum_init(&arc_sums.arcstat_l2_lsize, 0); 7358 wmsum_init(&arc_sums.arcstat_l2_psize, 0); 7359 aggsum_init(&arc_sums.arcstat_l2_hdr_size, 0); 7360 wmsum_init(&arc_sums.arcstat_l2_log_blk_writes, 0); 7361 wmsum_init(&arc_sums.arcstat_l2_log_blk_asize, 0); 7362 wmsum_init(&arc_sums.arcstat_l2_log_blk_count, 0); 7363 wmsum_init(&arc_sums.arcstat_l2_rebuild_success, 0); 7364 
wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_unsupported, 0); 7365 wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_io_errors, 0); 7366 wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_dh_errors, 0); 7367 wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_cksum_lb_errors, 0); 7368 wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_lowmem, 0); 7369 wmsum_init(&arc_sums.arcstat_l2_rebuild_size, 0); 7370 wmsum_init(&arc_sums.arcstat_l2_rebuild_asize, 0); 7371 wmsum_init(&arc_sums.arcstat_l2_rebuild_bufs, 0); 7372 wmsum_init(&arc_sums.arcstat_l2_rebuild_bufs_precached, 0); 7373 wmsum_init(&arc_sums.arcstat_l2_rebuild_log_blks, 0); 7374 wmsum_init(&arc_sums.arcstat_memory_throttle_count, 0); 7375 wmsum_init(&arc_sums.arcstat_memory_direct_count, 0); 7376 wmsum_init(&arc_sums.arcstat_memory_indirect_count, 0); 7377 wmsum_init(&arc_sums.arcstat_prune, 0); 7378 wmsum_init(&arc_sums.arcstat_meta_used, 0); 7379 wmsum_init(&arc_sums.arcstat_async_upgrade_sync, 0); 7380 wmsum_init(&arc_sums.arcstat_predictive_prefetch, 0); 7381 wmsum_init(&arc_sums.arcstat_demand_hit_predictive_prefetch, 0); 7382 wmsum_init(&arc_sums.arcstat_demand_iohit_predictive_prefetch, 0); 7383 wmsum_init(&arc_sums.arcstat_prescient_prefetch, 0); 7384 wmsum_init(&arc_sums.arcstat_demand_hit_prescient_prefetch, 0); 7385 wmsum_init(&arc_sums.arcstat_demand_iohit_prescient_prefetch, 0); 7386 wmsum_init(&arc_sums.arcstat_raw_size, 0); 7387 wmsum_init(&arc_sums.arcstat_cached_only_in_progress, 0); 7388 wmsum_init(&arc_sums.arcstat_abd_chunk_waste_size, 0); 7389 7390 arc_anon->arcs_state = ARC_STATE_ANON; 7391 arc_mru->arcs_state = ARC_STATE_MRU; 7392 arc_mru_ghost->arcs_state = ARC_STATE_MRU_GHOST; 7393 arc_mfu->arcs_state = ARC_STATE_MFU; 7394 arc_mfu_ghost->arcs_state = ARC_STATE_MFU_GHOST; 7395 arc_l2c_only->arcs_state = ARC_STATE_L2C_ONLY; 7396 arc_uncached->arcs_state = ARC_STATE_UNCACHED; 7397 } 7398 7399 static void 7400 arc_state_fini(void) 7401 { 7402 zfs_refcount_destroy(&arc_anon->arcs_esize[ARC_BUFC_METADATA]); 7403 zfs_refcount_destroy(&arc_anon->arcs_esize[ARC_BUFC_DATA]); 7404 zfs_refcount_destroy(&arc_mru->arcs_esize[ARC_BUFC_METADATA]); 7405 zfs_refcount_destroy(&arc_mru->arcs_esize[ARC_BUFC_DATA]); 7406 zfs_refcount_destroy(&arc_mru_ghost->arcs_esize[ARC_BUFC_METADATA]); 7407 zfs_refcount_destroy(&arc_mru_ghost->arcs_esize[ARC_BUFC_DATA]); 7408 zfs_refcount_destroy(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]); 7409 zfs_refcount_destroy(&arc_mfu->arcs_esize[ARC_BUFC_DATA]); 7410 zfs_refcount_destroy(&arc_mfu_ghost->arcs_esize[ARC_BUFC_METADATA]); 7411 zfs_refcount_destroy(&arc_mfu_ghost->arcs_esize[ARC_BUFC_DATA]); 7412 zfs_refcount_destroy(&arc_l2c_only->arcs_esize[ARC_BUFC_METADATA]); 7413 zfs_refcount_destroy(&arc_l2c_only->arcs_esize[ARC_BUFC_DATA]); 7414 zfs_refcount_destroy(&arc_uncached->arcs_esize[ARC_BUFC_METADATA]); 7415 zfs_refcount_destroy(&arc_uncached->arcs_esize[ARC_BUFC_DATA]); 7416 7417 zfs_refcount_destroy(&arc_anon->arcs_size[ARC_BUFC_DATA]); 7418 zfs_refcount_destroy(&arc_anon->arcs_size[ARC_BUFC_METADATA]); 7419 zfs_refcount_destroy(&arc_mru->arcs_size[ARC_BUFC_DATA]); 7420 zfs_refcount_destroy(&arc_mru->arcs_size[ARC_BUFC_METADATA]); 7421 zfs_refcount_destroy(&arc_mru_ghost->arcs_size[ARC_BUFC_DATA]); 7422 zfs_refcount_destroy(&arc_mru_ghost->arcs_size[ARC_BUFC_METADATA]); 7423 zfs_refcount_destroy(&arc_mfu->arcs_size[ARC_BUFC_DATA]); 7424 zfs_refcount_destroy(&arc_mfu->arcs_size[ARC_BUFC_METADATA]); 7425 zfs_refcount_destroy(&arc_mfu_ghost->arcs_size[ARC_BUFC_DATA]); 7426 
zfs_refcount_destroy(&arc_mfu_ghost->arcs_size[ARC_BUFC_METADATA]); 7427 zfs_refcount_destroy(&arc_l2c_only->arcs_size[ARC_BUFC_DATA]); 7428 zfs_refcount_destroy(&arc_l2c_only->arcs_size[ARC_BUFC_METADATA]); 7429 zfs_refcount_destroy(&arc_uncached->arcs_size[ARC_BUFC_DATA]); 7430 zfs_refcount_destroy(&arc_uncached->arcs_size[ARC_BUFC_METADATA]); 7431 7432 multilist_destroy(&arc_mru->arcs_list[ARC_BUFC_METADATA]); 7433 multilist_destroy(&arc_mru_ghost->arcs_list[ARC_BUFC_METADATA]); 7434 multilist_destroy(&arc_mfu->arcs_list[ARC_BUFC_METADATA]); 7435 multilist_destroy(&arc_mfu_ghost->arcs_list[ARC_BUFC_METADATA]); 7436 multilist_destroy(&arc_mru->arcs_list[ARC_BUFC_DATA]); 7437 multilist_destroy(&arc_mru_ghost->arcs_list[ARC_BUFC_DATA]); 7438 multilist_destroy(&arc_mfu->arcs_list[ARC_BUFC_DATA]); 7439 multilist_destroy(&arc_mfu_ghost->arcs_list[ARC_BUFC_DATA]); 7440 multilist_destroy(&arc_l2c_only->arcs_list[ARC_BUFC_METADATA]); 7441 multilist_destroy(&arc_l2c_only->arcs_list[ARC_BUFC_DATA]); 7442 multilist_destroy(&arc_uncached->arcs_list[ARC_BUFC_METADATA]); 7443 multilist_destroy(&arc_uncached->arcs_list[ARC_BUFC_DATA]); 7444 7445 wmsum_fini(&arc_mru_ghost->arcs_hits[ARC_BUFC_DATA]); 7446 wmsum_fini(&arc_mru_ghost->arcs_hits[ARC_BUFC_METADATA]); 7447 wmsum_fini(&arc_mfu_ghost->arcs_hits[ARC_BUFC_DATA]); 7448 wmsum_fini(&arc_mfu_ghost->arcs_hits[ARC_BUFC_METADATA]); 7449 7450 wmsum_fini(&arc_sums.arcstat_hits); 7451 wmsum_fini(&arc_sums.arcstat_iohits); 7452 wmsum_fini(&arc_sums.arcstat_misses); 7453 wmsum_fini(&arc_sums.arcstat_demand_data_hits); 7454 wmsum_fini(&arc_sums.arcstat_demand_data_iohits); 7455 wmsum_fini(&arc_sums.arcstat_demand_data_misses); 7456 wmsum_fini(&arc_sums.arcstat_demand_metadata_hits); 7457 wmsum_fini(&arc_sums.arcstat_demand_metadata_iohits); 7458 wmsum_fini(&arc_sums.arcstat_demand_metadata_misses); 7459 wmsum_fini(&arc_sums.arcstat_prefetch_data_hits); 7460 wmsum_fini(&arc_sums.arcstat_prefetch_data_iohits); 7461 wmsum_fini(&arc_sums.arcstat_prefetch_data_misses); 7462 wmsum_fini(&arc_sums.arcstat_prefetch_metadata_hits); 7463 wmsum_fini(&arc_sums.arcstat_prefetch_metadata_iohits); 7464 wmsum_fini(&arc_sums.arcstat_prefetch_metadata_misses); 7465 wmsum_fini(&arc_sums.arcstat_mru_hits); 7466 wmsum_fini(&arc_sums.arcstat_mru_ghost_hits); 7467 wmsum_fini(&arc_sums.arcstat_mfu_hits); 7468 wmsum_fini(&arc_sums.arcstat_mfu_ghost_hits); 7469 wmsum_fini(&arc_sums.arcstat_uncached_hits); 7470 wmsum_fini(&arc_sums.arcstat_deleted); 7471 wmsum_fini(&arc_sums.arcstat_mutex_miss); 7472 wmsum_fini(&arc_sums.arcstat_access_skip); 7473 wmsum_fini(&arc_sums.arcstat_evict_skip); 7474 wmsum_fini(&arc_sums.arcstat_evict_not_enough); 7475 wmsum_fini(&arc_sums.arcstat_evict_l2_cached); 7476 wmsum_fini(&arc_sums.arcstat_evict_l2_eligible); 7477 wmsum_fini(&arc_sums.arcstat_evict_l2_eligible_mfu); 7478 wmsum_fini(&arc_sums.arcstat_evict_l2_eligible_mru); 7479 wmsum_fini(&arc_sums.arcstat_evict_l2_ineligible); 7480 wmsum_fini(&arc_sums.arcstat_evict_l2_skip); 7481 wmsum_fini(&arc_sums.arcstat_hash_collisions); 7482 wmsum_fini(&arc_sums.arcstat_hash_chains); 7483 aggsum_fini(&arc_sums.arcstat_size); 7484 wmsum_fini(&arc_sums.arcstat_compressed_size); 7485 wmsum_fini(&arc_sums.arcstat_uncompressed_size); 7486 wmsum_fini(&arc_sums.arcstat_overhead_size); 7487 wmsum_fini(&arc_sums.arcstat_hdr_size); 7488 wmsum_fini(&arc_sums.arcstat_data_size); 7489 wmsum_fini(&arc_sums.arcstat_metadata_size); 7490 wmsum_fini(&arc_sums.arcstat_dbuf_size); 7491 wmsum_fini(&arc_sums.arcstat_dnode_size); 
7492 wmsum_fini(&arc_sums.arcstat_bonus_size); 7493 wmsum_fini(&arc_sums.arcstat_l2_hits); 7494 wmsum_fini(&arc_sums.arcstat_l2_misses); 7495 wmsum_fini(&arc_sums.arcstat_l2_prefetch_asize); 7496 wmsum_fini(&arc_sums.arcstat_l2_mru_asize); 7497 wmsum_fini(&arc_sums.arcstat_l2_mfu_asize); 7498 wmsum_fini(&arc_sums.arcstat_l2_bufc_data_asize); 7499 wmsum_fini(&arc_sums.arcstat_l2_bufc_metadata_asize); 7500 wmsum_fini(&arc_sums.arcstat_l2_feeds); 7501 wmsum_fini(&arc_sums.arcstat_l2_rw_clash); 7502 wmsum_fini(&arc_sums.arcstat_l2_read_bytes); 7503 wmsum_fini(&arc_sums.arcstat_l2_write_bytes); 7504 wmsum_fini(&arc_sums.arcstat_l2_writes_sent); 7505 wmsum_fini(&arc_sums.arcstat_l2_writes_done); 7506 wmsum_fini(&arc_sums.arcstat_l2_writes_error); 7507 wmsum_fini(&arc_sums.arcstat_l2_writes_lock_retry); 7508 wmsum_fini(&arc_sums.arcstat_l2_evict_lock_retry); 7509 wmsum_fini(&arc_sums.arcstat_l2_evict_reading); 7510 wmsum_fini(&arc_sums.arcstat_l2_evict_l1cached); 7511 wmsum_fini(&arc_sums.arcstat_l2_free_on_write); 7512 wmsum_fini(&arc_sums.arcstat_l2_abort_lowmem); 7513 wmsum_fini(&arc_sums.arcstat_l2_cksum_bad); 7514 wmsum_fini(&arc_sums.arcstat_l2_io_error); 7515 wmsum_fini(&arc_sums.arcstat_l2_lsize); 7516 wmsum_fini(&arc_sums.arcstat_l2_psize); 7517 aggsum_fini(&arc_sums.arcstat_l2_hdr_size); 7518 wmsum_fini(&arc_sums.arcstat_l2_log_blk_writes); 7519 wmsum_fini(&arc_sums.arcstat_l2_log_blk_asize); 7520 wmsum_fini(&arc_sums.arcstat_l2_log_blk_count); 7521 wmsum_fini(&arc_sums.arcstat_l2_rebuild_success); 7522 wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_unsupported); 7523 wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_io_errors); 7524 wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_dh_errors); 7525 wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_cksum_lb_errors); 7526 wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_lowmem); 7527 wmsum_fini(&arc_sums.arcstat_l2_rebuild_size); 7528 wmsum_fini(&arc_sums.arcstat_l2_rebuild_asize); 7529 wmsum_fini(&arc_sums.arcstat_l2_rebuild_bufs); 7530 wmsum_fini(&arc_sums.arcstat_l2_rebuild_bufs_precached); 7531 wmsum_fini(&arc_sums.arcstat_l2_rebuild_log_blks); 7532 wmsum_fini(&arc_sums.arcstat_memory_throttle_count); 7533 wmsum_fini(&arc_sums.arcstat_memory_direct_count); 7534 wmsum_fini(&arc_sums.arcstat_memory_indirect_count); 7535 wmsum_fini(&arc_sums.arcstat_prune); 7536 wmsum_fini(&arc_sums.arcstat_meta_used); 7537 wmsum_fini(&arc_sums.arcstat_async_upgrade_sync); 7538 wmsum_fini(&arc_sums.arcstat_predictive_prefetch); 7539 wmsum_fini(&arc_sums.arcstat_demand_hit_predictive_prefetch); 7540 wmsum_fini(&arc_sums.arcstat_demand_iohit_predictive_prefetch); 7541 wmsum_fini(&arc_sums.arcstat_prescient_prefetch); 7542 wmsum_fini(&arc_sums.arcstat_demand_hit_prescient_prefetch); 7543 wmsum_fini(&arc_sums.arcstat_demand_iohit_prescient_prefetch); 7544 wmsum_fini(&arc_sums.arcstat_raw_size); 7545 wmsum_fini(&arc_sums.arcstat_cached_only_in_progress); 7546 wmsum_fini(&arc_sums.arcstat_abd_chunk_waste_size); 7547 } 7548 7549 uint64_t 7550 arc_target_bytes(void) 7551 { 7552 return (arc_c); 7553 } 7554 7555 void 7556 arc_set_limits(uint64_t allmem) 7557 { 7558 /* Set min cache to 1/32 of all memory, or 32MB, whichever is more. */ 7559 arc_c_min = MAX(allmem / 32, 2ULL << SPA_MAXBLOCKSHIFT); 7560 7561 /* How to set default max varies by platform. 
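 * (A hedged note: arc_default_max() is provided by the platform-specific
 * code, so the fraction of physical memory it chooses is a per-platform
 * policy decision rather than something decided here.)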
*/ 7562 arc_c_max = arc_default_max(arc_c_min, allmem); 7563 } 7564 void 7565 arc_init(void) 7566 { 7567 uint64_t percent, allmem = arc_all_memory(); 7568 mutex_init(&arc_evict_lock, NULL, MUTEX_DEFAULT, NULL); 7569 list_create(&arc_evict_waiters, sizeof (arc_evict_waiter_t), 7570 offsetof(arc_evict_waiter_t, aew_node)); 7571 7572 arc_min_prefetch_ms = 1000; 7573 arc_min_prescient_prefetch_ms = 6000; 7574 7575 #if defined(_KERNEL) 7576 arc_lowmem_init(); 7577 #endif 7578 7579 arc_set_limits(allmem); 7580 7581 #ifdef _KERNEL 7582 /* 7583 * If zfs_arc_max is non-zero at init, meaning it was set in the kernel 7584 * environment before the module was loaded, don't block setting the 7585 * maximum because it is less than arc_c_min, instead, reset arc_c_min 7586 * to a lower value. 7587 * zfs_arc_min will be handled by arc_tuning_update(). 7588 */ 7589 if (zfs_arc_max != 0 && zfs_arc_max >= MIN_ARC_MAX && 7590 zfs_arc_max < allmem) { 7591 arc_c_max = zfs_arc_max; 7592 if (arc_c_min >= arc_c_max) { 7593 arc_c_min = MAX(zfs_arc_max / 2, 7594 2ULL << SPA_MAXBLOCKSHIFT); 7595 } 7596 } 7597 #else 7598 /* 7599 * In userland, there's only the memory pressure that we artificially 7600 * create (see arc_available_memory()). Don't let arc_c get too 7601 * small, because it can cause transactions to be larger than 7602 * arc_c, causing arc_tempreserve_space() to fail. 7603 */ 7604 arc_c_min = MAX(arc_c_max / 2, 2ULL << SPA_MAXBLOCKSHIFT); 7605 #endif 7606 7607 arc_c = arc_c_min; 7608 /* 7609 * 32-bit fixed point fractions of metadata from total ARC size, 7610 * MRU data from all data and MRU metadata from all metadata. 7611 */ 7612 arc_meta = (1ULL << 32) / 4; /* Metadata is 25% of arc_c. */ 7613 arc_pd = (1ULL << 32) / 2; /* Data MRU is 50% of data. */ 7614 arc_pm = (1ULL << 32) / 2; /* Metadata MRU is 50% of metadata. */ 7615 7616 percent = MIN(zfs_arc_dnode_limit_percent, 100); 7617 arc_dnode_limit = arc_c_max * percent / 100; 7618 7619 /* Apply user specified tunings */ 7620 arc_tuning_update(B_TRUE); 7621 7622 /* if kmem_flags are set, lets try to use less memory */ 7623 if (kmem_debugging()) 7624 arc_c = arc_c / 2; 7625 if (arc_c < arc_c_min) 7626 arc_c = arc_c_min; 7627 7628 arc_register_hotplug(); 7629 7630 arc_state_init(); 7631 7632 buf_init(); 7633 7634 list_create(&arc_prune_list, sizeof (arc_prune_t), 7635 offsetof(arc_prune_t, p_node)); 7636 mutex_init(&arc_prune_mtx, NULL, MUTEX_DEFAULT, NULL); 7637 7638 arc_prune_taskq = taskq_create("arc_prune", zfs_arc_prune_task_threads, 7639 defclsyspri, 100, INT_MAX, TASKQ_PREPOPULATE | TASKQ_DYNAMIC); 7640 7641 arc_ksp = kstat_create("zfs", 0, "arcstats", "misc", KSTAT_TYPE_NAMED, 7642 sizeof (arc_stats) / sizeof (kstat_named_t), KSTAT_FLAG_VIRTUAL); 7643 7644 if (arc_ksp != NULL) { 7645 arc_ksp->ks_data = &arc_stats; 7646 arc_ksp->ks_update = arc_kstat_update; 7647 kstat_install(arc_ksp); 7648 } 7649 7650 arc_state_evict_markers = 7651 arc_state_alloc_markers(arc_state_evict_marker_count); 7652 arc_evict_zthr = zthr_create_timer("arc_evict", 7653 arc_evict_cb_check, arc_evict_cb, NULL, SEC2NSEC(1), defclsyspri); 7654 arc_reap_zthr = zthr_create_timer("arc_reap", 7655 arc_reap_cb_check, arc_reap_cb, NULL, SEC2NSEC(1), minclsyspri); 7656 7657 arc_warm = B_FALSE; 7658 7659 /* 7660 * Calculate maximum amount of dirty data per pool. 7661 * 7662 * If it has been set by a module parameter, take that. 
7663 * Otherwise, use a percentage of physical memory defined by 7664 * zfs_dirty_data_max_percent (default 10%) with a cap at 7665 * zfs_dirty_data_max_max (default 4G or 25% of physical memory). 7666 */ 7667 #ifdef __LP64__ 7668 if (zfs_dirty_data_max_max == 0) 7669 zfs_dirty_data_max_max = MIN(4ULL * 1024 * 1024 * 1024, 7670 allmem * zfs_dirty_data_max_max_percent / 100); 7671 #else 7672 if (zfs_dirty_data_max_max == 0) 7673 zfs_dirty_data_max_max = MIN(1ULL * 1024 * 1024 * 1024, 7674 allmem * zfs_dirty_data_max_max_percent / 100); 7675 #endif 7676 7677 if (zfs_dirty_data_max == 0) { 7678 zfs_dirty_data_max = allmem * 7679 zfs_dirty_data_max_percent / 100; 7680 zfs_dirty_data_max = MIN(zfs_dirty_data_max, 7681 zfs_dirty_data_max_max); 7682 } 7683 7684 if (zfs_wrlog_data_max == 0) { 7685 7686 /* 7687 * dp_wrlog_total is reduced for each txg at the end of 7688 * spa_sync(). However, dp_dirty_total is reduced every time 7689 * a block is written out. Thus under normal operation, 7690 * dp_wrlog_total could grow 2 times as big as 7691 * zfs_dirty_data_max. 7692 */ 7693 zfs_wrlog_data_max = zfs_dirty_data_max * 2; 7694 } 7695 } 7696 7697 void 7698 arc_fini(void) 7699 { 7700 arc_prune_t *p; 7701 7702 #ifdef _KERNEL 7703 arc_lowmem_fini(); 7704 #endif /* _KERNEL */ 7705 7706 /* Use B_TRUE to ensure *all* buffers are evicted */ 7707 arc_flush(NULL, B_TRUE); 7708 7709 if (arc_ksp != NULL) { 7710 kstat_delete(arc_ksp); 7711 arc_ksp = NULL; 7712 } 7713 7714 taskq_wait(arc_prune_taskq); 7715 taskq_destroy(arc_prune_taskq); 7716 7717 mutex_enter(&arc_prune_mtx); 7718 while ((p = list_remove_head(&arc_prune_list)) != NULL) { 7719 zfs_refcount_remove(&p->p_refcnt, &arc_prune_list); 7720 zfs_refcount_destroy(&p->p_refcnt); 7721 kmem_free(p, sizeof (*p)); 7722 } 7723 mutex_exit(&arc_prune_mtx); 7724 7725 list_destroy(&arc_prune_list); 7726 mutex_destroy(&arc_prune_mtx); 7727 7728 (void) zthr_cancel(arc_evict_zthr); 7729 (void) zthr_cancel(arc_reap_zthr); 7730 arc_state_free_markers(arc_state_evict_markers, 7731 arc_state_evict_marker_count); 7732 7733 mutex_destroy(&arc_evict_lock); 7734 list_destroy(&arc_evict_waiters); 7735 7736 /* 7737 * Free any buffers that were tagged for destruction. This needs 7738 * to occur before arc_state_fini() runs and destroys the aggsum 7739 * values which are updated when freeing scatter ABDs. 7740 */ 7741 l2arc_do_free_on_write(); 7742 7743 /* 7744 * buf_fini() must proceed arc_state_fini() because buf_fin() may 7745 * trigger the release of kmem magazines, which can callback to 7746 * arc_space_return() which accesses aggsums freed in act_state_fini(). 7747 */ 7748 buf_fini(); 7749 arc_state_fini(); 7750 7751 arc_unregister_hotplug(); 7752 7753 /* 7754 * We destroy the zthrs after all the ARC state has been 7755 * torn down to avoid the case of them receiving any 7756 * wakeup() signals after they are destroyed. 7757 */ 7758 zthr_destroy(arc_evict_zthr); 7759 zthr_destroy(arc_reap_zthr); 7760 7761 ASSERT0(arc_loaned_bytes); 7762 } 7763 7764 /* 7765 * Level 2 ARC 7766 * 7767 * The level 2 ARC (L2ARC) is a cache layer in-between main memory and disk. 7768 * It uses dedicated storage devices to hold cached data, which are populated 7769 * using large infrequent writes. The main role of this cache is to boost 7770 * the performance of random read workloads. The intended L2ARC devices 7771 * include short-stroked disks, solid state disks, and other media with 7772 * substantially faster read latency than disk. 
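 *
 * As a rough orientation for the diagram below: the ARC feeds eligible
 * buffers to the L2ARC devices asynchronously via l2arc_feed_thread(),
 * while reads that miss in the ARC may be satisfied from the L2ARC before
 * falling back to the pool disks.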
7773 * 7774 * +-----------------------+ 7775 * | ARC | 7776 * +-----------------------+ 7777 * | ^ ^ 7778 * | | | 7779 * l2arc_feed_thread() arc_read() 7780 * | | | 7781 * | l2arc read | 7782 * V | | 7783 * +---------------+ | 7784 * | L2ARC | | 7785 * +---------------+ | 7786 * | ^ | 7787 * l2arc_write() | | 7788 * | | | 7789 * V | | 7790 * +-------+ +-------+ 7791 * | vdev | | vdev | 7792 * | cache | | cache | 7793 * +-------+ +-------+ 7794 * +=========+ .-----. 7795 * : L2ARC : |-_____-| 7796 * : devices : | Disks | 7797 * +=========+ `-_____-' 7798 * 7799 * Read requests are satisfied from the following sources, in order: 7800 * 7801 * 1) ARC 7802 * 2) vdev cache of L2ARC devices 7803 * 3) L2ARC devices 7804 * 4) vdev cache of disks 7805 * 5) disks 7806 * 7807 * Some L2ARC device types exhibit extremely slow write performance. 7808 * To accommodate for this there are some significant differences between 7809 * the L2ARC and traditional cache design: 7810 * 7811 * 1. There is no eviction path from the ARC to the L2ARC. Evictions from 7812 * the ARC behave as usual, freeing buffers and placing headers on ghost 7813 * lists. The ARC does not send buffers to the L2ARC during eviction as 7814 * this would add inflated write latencies for all ARC memory pressure. 7815 * 7816 * 2. The L2ARC attempts to cache data from the ARC before it is evicted. 7817 * It does this by periodically scanning buffers from the eviction-end of 7818 * the MFU and MRU ARC lists, copying them to the L2ARC devices if they are 7819 * not already there. It scans until a headroom of buffers is satisfied, 7820 * which itself is a buffer for ARC eviction. If a compressible buffer is 7821 * found during scanning and selected for writing to an L2ARC device, we 7822 * temporarily boost scanning headroom during the next scan cycle to make 7823 * sure we adapt to compression effects (which might significantly reduce 7824 * the data volume we write to L2ARC). The thread that does this is 7825 * l2arc_feed_thread(), illustrated below; example sizes are included to 7826 * provide a better sense of ratio than this diagram: 7827 * 7828 * head --> tail 7829 * +---------------------+----------+ 7830 * ARC_mfu |:::::#:::::::::::::::|o#o###o###|-->. # already on L2ARC 7831 * +---------------------+----------+ | o L2ARC eligible 7832 * ARC_mru |:#:::::::::::::::::::|#o#ooo####|-->| : ARC buffer 7833 * +---------------------+----------+ | 7834 * 15.9 Gbytes ^ 32 Mbytes | 7835 * headroom | 7836 * l2arc_feed_thread() 7837 * | 7838 * l2arc write hand <--[oooo]--' 7839 * | 8 Mbyte 7840 * | write max 7841 * V 7842 * +==============================+ 7843 * L2ARC dev |####|#|###|###| |####| ... | 7844 * +==============================+ 7845 * 32 Gbytes 7846 * 7847 * 3. If an ARC buffer is copied to the L2ARC but then hit instead of 7848 * evicted, then the L2ARC has cached a buffer much sooner than it probably 7849 * needed to, potentially wasting L2ARC device bandwidth and storage. It is 7850 * safe to say that this is an uncommon case, since buffers at the end of 7851 * the ARC lists have moved there due to inactivity. 7852 * 7853 * 4. If the ARC evicts faster than the L2ARC can maintain a headroom, 7854 * then the L2ARC simply misses copying some buffers. This serves as a 7855 * pressure valve to prevent heavy read workloads from both stalling the ARC 7856 * with waits and clogging the L2ARC with writes. 
This also helps prevent 7857 * the potential for the L2ARC to churn if it attempts to cache content too 7858 * quickly, such as during backups of the entire pool. 7859 * 7860 * 5. After system boot and before the ARC has filled main memory, there are 7861 * no evictions from the ARC and so the tails of the ARC_mfu and ARC_mru 7862 * lists can remain mostly static. Instead of searching from tail of these 7863 * lists as pictured, the l2arc_feed_thread() will search from the list heads 7864 * for eligible buffers, greatly increasing its chance of finding them. 7865 * 7866 * The L2ARC device write speed is also boosted during this time so that 7867 * the L2ARC warms up faster. Since there have been no ARC evictions yet, 7868 * there are no L2ARC reads, and no fear of degrading read performance 7869 * through increased writes. 7870 * 7871 * 6. Writes to the L2ARC devices are grouped and sent in-sequence, so that 7872 * the vdev queue can aggregate them into larger and fewer writes. Each 7873 * device is written to in a rotor fashion, sweeping writes through 7874 * available space then repeating. 7875 * 7876 * 7. The L2ARC does not store dirty content. It never needs to flush 7877 * write buffers back to disk based storage. 7878 * 7879 * 8. If an ARC buffer is written (and dirtied) which also exists in the 7880 * L2ARC, the now stale L2ARC buffer is immediately dropped. 7881 * 7882 * The performance of the L2ARC can be tweaked by a number of tunables, which 7883 * may be necessary for different workloads: 7884 * 7885 * l2arc_write_max max write bytes per interval 7886 * l2arc_write_boost extra write bytes during device warmup 7887 * l2arc_noprefetch skip caching prefetched buffers 7888 * l2arc_headroom number of max device writes to precache 7889 * l2arc_headroom_boost when we find compressed buffers during ARC 7890 * scanning, we multiply headroom by this 7891 * percentage factor for the next scan cycle, 7892 * since more compressed buffers are likely to 7893 * be present 7894 * l2arc_feed_secs seconds between L2ARC writing 7895 * 7896 * Tunables may be removed or added as future performance improvements are 7897 * integrated, and also may become zpool properties. 7898 * 7899 * There are three key functions that control how the L2ARC warms up: 7900 * 7901 * l2arc_write_eligible() check if a buffer is eligible to cache 7902 * l2arc_write_size() calculate how much to write 7903 * l2arc_write_interval() calculate sleep delay between writes 7904 * 7905 * These three functions determine what to write, how much, and how quickly 7906 * to send writes. 7907 * 7908 * L2ARC persistence: 7909 * 7910 * When writing buffers to L2ARC, we periodically add some metadata to 7911 * make sure we can pick them up after reboot, thus dramatically reducing 7912 * the impact that any downtime has on the performance of storage systems 7913 * with large caches. 7914 * 7915 * The implementation works fairly simply by integrating the following two 7916 * modifications: 7917 * 7918 * *) When writing to the L2ARC, we occasionally write a "l2arc log block", 7919 * which is an additional piece of metadata which describes what's been 7920 * written. This allows us to rebuild the arc_buf_hdr_t structures of the 7921 * main ARC buffers. There are 2 linked-lists of log blocks headed by 7922 * dh_start_lbps[2]. We alternate which chain we append to, so they are 7923 * time-wise and offset-wise interleaved, but that is an optimization rather 7924 * than for correctness. 
The log block also includes a pointer to the 7925 * previous block in its chain. 7926 * 7927 * *) We reserve SPA_MINBLOCKSIZE of space at the start of each L2ARC device 7928 * for our header bookkeeping purposes. This contains a device header, 7929 * which contains our top-level reference structures. We update it each 7930 * time we write a new log block, so that we're able to locate it in the 7931 * L2ARC device. If this write results in an inconsistent device header 7932 * (e.g. due to power failure), we detect this by verifying the header's 7933 * checksum and simply fail to reconstruct the L2ARC after reboot. 7934 * 7935 * Implementation diagram: 7936 * 7937 * +=== L2ARC device (not to scale) ======================================+ 7938 * | ___two newest log block pointers__.__________ | 7939 * | / \dh_start_lbps[1] | 7940 * | / \ \dh_start_lbps[0]| 7941 * |.___/__. V V | 7942 * ||L2 dev|....|lb |bufs |lb |bufs |lb |bufs |lb |bufs |lb |---(empty)---| 7943 * || hdr| ^ /^ /^ / / | 7944 * |+------+ ...--\-------/ \-----/--\------/ / | 7945 * | \--------------/ \--------------/ | 7946 * +======================================================================+ 7947 * 7948 * As can be seen on the diagram, rather than using a simple linked list, 7949 * we use a pair of linked lists with alternating elements. This is a 7950 * performance enhancement due to the fact that we only find out the 7951 * address of the next log block access once the current block has been 7952 * completely read in. Obviously, this hurts performance, because we'd be 7953 * keeping the device's I/O queue at only a 1 operation deep, thus 7954 * incurring a large amount of I/O round-trip latency. Having two lists 7955 * allows us to fetch two log blocks ahead of where we are currently 7956 * rebuilding L2ARC buffers. 7957 * 7958 * On-device data structures: 7959 * 7960 * L2ARC device header: l2arc_dev_hdr_phys_t 7961 * L2ARC log block: l2arc_log_blk_phys_t 7962 * 7963 * L2ARC reconstruction: 7964 * 7965 * When writing data, we simply write in the standard rotary fashion, 7966 * evicting buffers as we go and simply writing new data over them (writing 7967 * a new log block every now and then). This obviously means that once we 7968 * loop around the end of the device, we will start cutting into an already 7969 * committed log block (and its referenced data buffers), like so: 7970 * 7971 * current write head__ __old tail 7972 * \ / 7973 * V V 7974 * <--|bufs |lb |bufs |lb | |bufs |lb |bufs |lb |--> 7975 * ^ ^^^^^^^^^___________________________________ 7976 * | \ 7977 * <<nextwrite>> may overwrite this blk and/or its bufs --' 7978 * 7979 * When importing the pool, we detect this situation and use it to stop 7980 * our scanning process (see l2arc_rebuild). 7981 * 7982 * There is one significant caveat to consider when rebuilding ARC contents 7983 * from an L2ARC device: what about invalidated buffers? Given the above 7984 * construction, we cannot update blocks which we've already written to amend 7985 * them to remove buffers which were invalidated. Thus, during reconstruction, 7986 * we might be populating the cache with buffers for data that's not on the 7987 * main pool anymore, or may have been overwritten! 7988 * 7989 * As it turns out, this isn't a problem. Every arc_read request includes 7990 * both the DVA and, crucially, the birth TXG of the BP the caller is 7991 * looking for. 
So even if the cache were populated by completely rotten 7992 * blocks for data that had been long deleted and/or overwritten, we'll 7993 * never actually return bad data from the cache, since the DVA with the 7994 * birth TXG uniquely identify a block in space and time - once created, 7995 * a block is immutable on disk. The worst thing we have done is wasted 7996 * some time and memory at l2arc rebuild to reconstruct outdated ARC 7997 * entries that will get dropped from the l2arc as it is being updated 7998 * with new blocks. 7999 * 8000 * L2ARC buffers that have been evicted by l2arc_evict() ahead of the write 8001 * hand are not restored. This is done by saving the offset (in bytes) 8002 * l2arc_evict() has evicted to in the L2ARC device header and taking it 8003 * into account when restoring buffers. 8004 */ 8005 8006 static boolean_t 8007 l2arc_write_eligible(uint64_t spa_guid, arc_buf_hdr_t *hdr) 8008 { 8009 /* 8010 * A buffer is *not* eligible for the L2ARC if it: 8011 * 1. belongs to a different spa. 8012 * 2. is already cached on the L2ARC. 8013 * 3. has an I/O in progress (it may be an incomplete read). 8014 * 4. is flagged not eligible (zfs property). 8015 */ 8016 if (hdr->b_spa != spa_guid || HDR_HAS_L2HDR(hdr) || 8017 HDR_IO_IN_PROGRESS(hdr) || !HDR_L2CACHE(hdr)) 8018 return (B_FALSE); 8019 8020 return (B_TRUE); 8021 } 8022 8023 static uint64_t 8024 l2arc_write_size(l2arc_dev_t *dev) 8025 { 8026 uint64_t size; 8027 8028 /* 8029 * Make sure our globals have meaningful values in case the user 8030 * altered them. 8031 */ 8032 size = l2arc_write_max; 8033 if (size == 0) { 8034 cmn_err(CE_NOTE, "Bad value for l2arc_write_max, value must " 8035 "be greater than zero, resetting it to the default (%d)", 8036 L2ARC_WRITE_SIZE); 8037 size = l2arc_write_max = L2ARC_WRITE_SIZE; 8038 } 8039 8040 if (arc_warm == B_FALSE) 8041 size += l2arc_write_boost; 8042 8043 /* We need to add in the worst case scenario of log block overhead. */ 8044 size += l2arc_log_blk_overhead(size, dev); 8045 if (dev->l2ad_vdev->vdev_has_trim && l2arc_trim_ahead > 0) { 8046 /* 8047 * Trim ahead of the write size 64MB or (l2arc_trim_ahead/100) 8048 * times the writesize, whichever is greater. 8049 */ 8050 size += MAX(64 * 1024 * 1024, 8051 (size * l2arc_trim_ahead) / 100); 8052 } 8053 8054 /* 8055 * Make sure the write size does not exceed the size of the cache 8056 * device. This is important in l2arc_evict(), otherwise infinite 8057 * iteration can occur. 
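 * As a rough sketch of what is being capped (terms as used above, not any
 * particular defaults): the requested write size is approximately
 *
 *   l2arc_write_max
 *     + l2arc_write_boost                         (only while arc_warm is false)
 *     + l2arc_log_blk_overhead(size, dev)         (persistent L2ARC metadata)
 *     + MAX(64MB, size * l2arc_trim_ahead / 100)  (only if TRIM is enabled)
 *
 * and it must fit within dev->l2ad_end - dev->l2ad_start, otherwise the
 * tunables are reset below.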
8058 */ 8059 if (size > dev->l2ad_end - dev->l2ad_start) { 8060 cmn_err(CE_NOTE, "l2arc_write_max or l2arc_write_boost " 8061 "plus the overhead of log blocks (persistent L2ARC, " 8062 "%llu bytes) exceeds the size of the cache device " 8063 "(guid %llu), resetting them to the default (%d)", 8064 (u_longlong_t)l2arc_log_blk_overhead(size, dev), 8065 (u_longlong_t)dev->l2ad_vdev->vdev_guid, L2ARC_WRITE_SIZE); 8066 8067 size = l2arc_write_max = l2arc_write_boost = L2ARC_WRITE_SIZE; 8068 8069 if (l2arc_trim_ahead > 1) { 8070 cmn_err(CE_NOTE, "l2arc_trim_ahead set to 1"); 8071 l2arc_trim_ahead = 1; 8072 } 8073 8074 if (arc_warm == B_FALSE) 8075 size += l2arc_write_boost; 8076 8077 size += l2arc_log_blk_overhead(size, dev); 8078 if (dev->l2ad_vdev->vdev_has_trim && l2arc_trim_ahead > 0) { 8079 size += MAX(64 * 1024 * 1024, 8080 (size * l2arc_trim_ahead) / 100); 8081 } 8082 } 8083 8084 return (size); 8085 8086 } 8087 8088 static clock_t 8089 l2arc_write_interval(clock_t began, uint64_t wanted, uint64_t wrote) 8090 { 8091 clock_t interval, next, now; 8092 8093 /* 8094 * If the ARC lists are busy, increase our write rate; if the 8095 * lists are stale, idle back. This is achieved by checking 8096 * how much we previously wrote - if it was more than half of 8097 * what we wanted, schedule the next write much sooner. 8098 */ 8099 if (l2arc_feed_again && wrote > (wanted / 2)) 8100 interval = (hz * l2arc_feed_min_ms) / 1000; 8101 else 8102 interval = hz * l2arc_feed_secs; 8103 8104 now = ddi_get_lbolt(); 8105 next = MAX(now, MIN(now + interval, began + interval)); 8106 8107 return (next); 8108 } 8109 8110 /* 8111 * Cycle through L2ARC devices. This is how L2ARC load balances. 8112 * If a device is returned, this also returns holding the spa config lock. 8113 */ 8114 static l2arc_dev_t * 8115 l2arc_dev_get_next(void) 8116 { 8117 l2arc_dev_t *first, *next = NULL; 8118 8119 /* 8120 * Lock out the removal of spas (spa_namespace_lock), then removal 8121 * of cache devices (l2arc_dev_mtx). Once a device has been selected, 8122 * both locks will be dropped and a spa config lock held instead. 8123 */ 8124 mutex_enter(&spa_namespace_lock); 8125 mutex_enter(&l2arc_dev_mtx); 8126 8127 /* if there are no vdevs, there is nothing to do */ 8128 if (l2arc_ndev == 0) 8129 goto out; 8130 8131 first = NULL; 8132 next = l2arc_dev_last; 8133 do { 8134 /* loop around the list looking for a non-faulted vdev */ 8135 if (next == NULL) { 8136 next = list_head(l2arc_dev_list); 8137 } else { 8138 next = list_next(l2arc_dev_list, next); 8139 if (next == NULL) 8140 next = list_head(l2arc_dev_list); 8141 } 8142 8143 /* if we have come back to the start, bail out */ 8144 if (first == NULL) 8145 first = next; 8146 else if (next == first) 8147 break; 8148 8149 ASSERT3P(next, !=, NULL); 8150 } while (vdev_is_dead(next->l2ad_vdev) || next->l2ad_rebuild || 8151 next->l2ad_trim_all); 8152 8153 /* if we were unable to find any usable vdevs, return NULL */ 8154 if (vdev_is_dead(next->l2ad_vdev) || next->l2ad_rebuild || 8155 next->l2ad_trim_all) 8156 next = NULL; 8157 8158 l2arc_dev_last = next; 8159 8160 out: 8161 mutex_exit(&l2arc_dev_mtx); 8162 8163 /* 8164 * Grab the config lock to prevent the 'next' device from being 8165 * removed while we are writing to it. 8166 */ 8167 if (next != NULL) 8168 spa_config_enter(next->l2ad_spa, SCL_L2ARC, next, RW_READER); 8169 mutex_exit(&spa_namespace_lock); 8170 8171 return (next); 8172 } 8173 8174 /* 8175 * Free buffers that were tagged for destruction. 
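 *
 * The entries drained here are queued elsewhere (for example by
 * l2arc_free_abd_on_write() when a private copy of a buffer is created
 * for an in-flight L2ARC write); they can only be freed once the write
 * zio that referenced them has completed.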
8176 */ 8177 static void 8178 l2arc_do_free_on_write(void) 8179 { 8180 l2arc_data_free_t *df; 8181 8182 mutex_enter(&l2arc_free_on_write_mtx); 8183 while ((df = list_remove_head(l2arc_free_on_write)) != NULL) { 8184 ASSERT3P(df->l2df_abd, !=, NULL); 8185 abd_free(df->l2df_abd); 8186 kmem_free(df, sizeof (l2arc_data_free_t)); 8187 } 8188 mutex_exit(&l2arc_free_on_write_mtx); 8189 } 8190 8191 /* 8192 * A write to a cache device has completed. Update all headers to allow 8193 * reads from these buffers to begin. 8194 */ 8195 static void 8196 l2arc_write_done(zio_t *zio) 8197 { 8198 l2arc_write_callback_t *cb; 8199 l2arc_lb_abd_buf_t *abd_buf; 8200 l2arc_lb_ptr_buf_t *lb_ptr_buf; 8201 l2arc_dev_t *dev; 8202 l2arc_dev_hdr_phys_t *l2dhdr; 8203 list_t *buflist; 8204 arc_buf_hdr_t *head, *hdr, *hdr_prev; 8205 kmutex_t *hash_lock; 8206 int64_t bytes_dropped = 0; 8207 8208 cb = zio->io_private; 8209 ASSERT3P(cb, !=, NULL); 8210 dev = cb->l2wcb_dev; 8211 l2dhdr = dev->l2ad_dev_hdr; 8212 ASSERT3P(dev, !=, NULL); 8213 head = cb->l2wcb_head; 8214 ASSERT3P(head, !=, NULL); 8215 buflist = &dev->l2ad_buflist; 8216 ASSERT3P(buflist, !=, NULL); 8217 DTRACE_PROBE2(l2arc__iodone, zio_t *, zio, 8218 l2arc_write_callback_t *, cb); 8219 8220 /* 8221 * All writes completed, or an error was hit. 8222 */ 8223 top: 8224 mutex_enter(&dev->l2ad_mtx); 8225 for (hdr = list_prev(buflist, head); hdr; hdr = hdr_prev) { 8226 hdr_prev = list_prev(buflist, hdr); 8227 8228 hash_lock = HDR_LOCK(hdr); 8229 8230 /* 8231 * We cannot use mutex_enter or else we can deadlock 8232 * with l2arc_write_buffers (due to swapping the order 8233 * the hash lock and l2ad_mtx are taken). 8234 */ 8235 if (!mutex_tryenter(hash_lock)) { 8236 /* 8237 * Missed the hash lock. We must retry so we 8238 * don't leave the ARC_FLAG_L2_WRITING bit set. 8239 */ 8240 ARCSTAT_BUMP(arcstat_l2_writes_lock_retry); 8241 8242 /* 8243 * We don't want to rescan the headers we've 8244 * already marked as having been written out, so 8245 * we reinsert the head node so we can pick up 8246 * where we left off. 8247 */ 8248 list_remove(buflist, head); 8249 list_insert_after(buflist, hdr, head); 8250 8251 mutex_exit(&dev->l2ad_mtx); 8252 8253 /* 8254 * We wait for the hash lock to become available 8255 * to try and prevent busy waiting, and increase 8256 * the chance we'll be able to acquire the lock 8257 * the next time around. 8258 */ 8259 mutex_enter(hash_lock); 8260 mutex_exit(hash_lock); 8261 goto top; 8262 } 8263 8264 /* 8265 * We could not have been moved into the arc_l2c_only 8266 * state while in-flight due to our ARC_FLAG_L2_WRITING 8267 * bit being set. Let's just ensure that's being enforced. 8268 */ 8269 ASSERT(HDR_HAS_L1HDR(hdr)); 8270 8271 /* 8272 * Skipped - drop L2ARC entry and mark the header as no 8273 * longer L2 eligibile. 8274 */ 8275 if (zio->io_error != 0) { 8276 /* 8277 * Error - drop L2ARC entry. 8278 */ 8279 list_remove(buflist, hdr); 8280 arc_hdr_clear_flags(hdr, ARC_FLAG_HAS_L2HDR); 8281 8282 uint64_t psize = HDR_GET_PSIZE(hdr); 8283 l2arc_hdr_arcstats_decrement(hdr); 8284 8285 bytes_dropped += 8286 vdev_psize_to_asize(dev->l2ad_vdev, psize); 8287 (void) zfs_refcount_remove_many(&dev->l2ad_alloc, 8288 arc_hdr_size(hdr), hdr); 8289 } 8290 8291 /* 8292 * Allow ARC to begin reads and ghost list evictions to 8293 * this L2ARC entry. 8294 */ 8295 arc_hdr_clear_flags(hdr, ARC_FLAG_L2_WRITING); 8296 8297 mutex_exit(hash_lock); 8298 } 8299 8300 /* 8301 * Free the allocated abd buffers for writing the log blocks. 
8302 * If the zio failed reclaim the allocated space and remove the 8303 * pointers to these log blocks from the log block pointer list 8304 * of the L2ARC device. 8305 */ 8306 while ((abd_buf = list_remove_tail(&cb->l2wcb_abd_list)) != NULL) { 8307 abd_free(abd_buf->abd); 8308 zio_buf_free(abd_buf, sizeof (*abd_buf)); 8309 if (zio->io_error != 0) { 8310 lb_ptr_buf = list_remove_head(&dev->l2ad_lbptr_list); 8311 /* 8312 * L2BLK_GET_PSIZE returns aligned size for log 8313 * blocks. 8314 */ 8315 uint64_t asize = 8316 L2BLK_GET_PSIZE((lb_ptr_buf->lb_ptr)->lbp_prop); 8317 bytes_dropped += asize; 8318 ARCSTAT_INCR(arcstat_l2_log_blk_asize, -asize); 8319 ARCSTAT_BUMPDOWN(arcstat_l2_log_blk_count); 8320 zfs_refcount_remove_many(&dev->l2ad_lb_asize, asize, 8321 lb_ptr_buf); 8322 zfs_refcount_remove(&dev->l2ad_lb_count, lb_ptr_buf); 8323 kmem_free(lb_ptr_buf->lb_ptr, 8324 sizeof (l2arc_log_blkptr_t)); 8325 kmem_free(lb_ptr_buf, sizeof (l2arc_lb_ptr_buf_t)); 8326 } 8327 } 8328 list_destroy(&cb->l2wcb_abd_list); 8329 8330 if (zio->io_error != 0) { 8331 ARCSTAT_BUMP(arcstat_l2_writes_error); 8332 8333 /* 8334 * Restore the lbps array in the header to its previous state. 8335 * If the list of log block pointers is empty, zero out the 8336 * log block pointers in the device header. 8337 */ 8338 lb_ptr_buf = list_head(&dev->l2ad_lbptr_list); 8339 for (int i = 0; i < 2; i++) { 8340 if (lb_ptr_buf == NULL) { 8341 /* 8342 * If the list is empty zero out the device 8343 * header. Otherwise zero out the second log 8344 * block pointer in the header. 8345 */ 8346 if (i == 0) { 8347 memset(l2dhdr, 0, 8348 dev->l2ad_dev_hdr_asize); 8349 } else { 8350 memset(&l2dhdr->dh_start_lbps[i], 0, 8351 sizeof (l2arc_log_blkptr_t)); 8352 } 8353 break; 8354 } 8355 memcpy(&l2dhdr->dh_start_lbps[i], lb_ptr_buf->lb_ptr, 8356 sizeof (l2arc_log_blkptr_t)); 8357 lb_ptr_buf = list_next(&dev->l2ad_lbptr_list, 8358 lb_ptr_buf); 8359 } 8360 } 8361 8362 ARCSTAT_BUMP(arcstat_l2_writes_done); 8363 list_remove(buflist, head); 8364 ASSERT(!HDR_HAS_L1HDR(head)); 8365 kmem_cache_free(hdr_l2only_cache, head); 8366 mutex_exit(&dev->l2ad_mtx); 8367 8368 ASSERT(dev->l2ad_vdev != NULL); 8369 vdev_space_update(dev->l2ad_vdev, -bytes_dropped, 0, 0); 8370 8371 l2arc_do_free_on_write(); 8372 8373 kmem_free(cb, sizeof (l2arc_write_callback_t)); 8374 } 8375 8376 static int 8377 l2arc_untransform(zio_t *zio, l2arc_read_callback_t *cb) 8378 { 8379 int ret; 8380 spa_t *spa = zio->io_spa; 8381 arc_buf_hdr_t *hdr = cb->l2rcb_hdr; 8382 blkptr_t *bp = zio->io_bp; 8383 uint8_t salt[ZIO_DATA_SALT_LEN]; 8384 uint8_t iv[ZIO_DATA_IV_LEN]; 8385 uint8_t mac[ZIO_DATA_MAC_LEN]; 8386 boolean_t no_crypt = B_FALSE; 8387 8388 /* 8389 * ZIL data is never be written to the L2ARC, so we don't need 8390 * special handling for its unique MAC storage. 8391 */ 8392 ASSERT3U(BP_GET_TYPE(bp), !=, DMU_OT_INTENT_LOG); 8393 ASSERT(MUTEX_HELD(HDR_LOCK(hdr))); 8394 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 8395 8396 /* 8397 * If the data was encrypted, decrypt it now. Note that 8398 * we must check the bp here and not the hdr, since the 8399 * hdr does not have its encryption parameters updated 8400 * until arc_read_done(). 
8401 */ 8402 if (BP_IS_ENCRYPTED(bp)) { 8403 abd_t *eabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr, 8404 ARC_HDR_USE_RESERVE); 8405 8406 zio_crypt_decode_params_bp(bp, salt, iv); 8407 zio_crypt_decode_mac_bp(bp, mac); 8408 8409 ret = spa_do_crypt_abd(B_FALSE, spa, &cb->l2rcb_zb, 8410 BP_GET_TYPE(bp), BP_GET_DEDUP(bp), BP_SHOULD_BYTESWAP(bp), 8411 salt, iv, mac, HDR_GET_PSIZE(hdr), eabd, 8412 hdr->b_l1hdr.b_pabd, &no_crypt); 8413 if (ret != 0) { 8414 arc_free_data_abd(hdr, eabd, arc_hdr_size(hdr), hdr); 8415 goto error; 8416 } 8417 8418 /* 8419 * If we actually performed decryption, replace b_pabd 8420 * with the decrypted data. Otherwise we can just throw 8421 * our decryption buffer away. 8422 */ 8423 if (!no_crypt) { 8424 arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, 8425 arc_hdr_size(hdr), hdr); 8426 hdr->b_l1hdr.b_pabd = eabd; 8427 zio->io_abd = eabd; 8428 } else { 8429 arc_free_data_abd(hdr, eabd, arc_hdr_size(hdr), hdr); 8430 } 8431 } 8432 8433 /* 8434 * If the L2ARC block was compressed, but ARC compression 8435 * is disabled we decompress the data into a new buffer and 8436 * replace the existing data. 8437 */ 8438 if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && 8439 !HDR_COMPRESSION_ENABLED(hdr)) { 8440 abd_t *cabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr, 8441 ARC_HDR_USE_RESERVE); 8442 void *tmp = abd_borrow_buf(cabd, arc_hdr_size(hdr)); 8443 8444 ret = zio_decompress_data(HDR_GET_COMPRESS(hdr), 8445 hdr->b_l1hdr.b_pabd, tmp, HDR_GET_PSIZE(hdr), 8446 HDR_GET_LSIZE(hdr), &hdr->b_complevel); 8447 if (ret != 0) { 8448 abd_return_buf_copy(cabd, tmp, arc_hdr_size(hdr)); 8449 arc_free_data_abd(hdr, cabd, arc_hdr_size(hdr), hdr); 8450 goto error; 8451 } 8452 8453 abd_return_buf_copy(cabd, tmp, arc_hdr_size(hdr)); 8454 arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, 8455 arc_hdr_size(hdr), hdr); 8456 hdr->b_l1hdr.b_pabd = cabd; 8457 zio->io_abd = cabd; 8458 zio->io_size = HDR_GET_LSIZE(hdr); 8459 } 8460 8461 return (0); 8462 8463 error: 8464 return (ret); 8465 } 8466 8467 8468 /* 8469 * A read to a cache device completed. Validate buffer contents before 8470 * handing over to the regular ARC routines. 8471 */ 8472 static void 8473 l2arc_read_done(zio_t *zio) 8474 { 8475 int tfm_error = 0; 8476 l2arc_read_callback_t *cb = zio->io_private; 8477 arc_buf_hdr_t *hdr; 8478 kmutex_t *hash_lock; 8479 boolean_t valid_cksum; 8480 boolean_t using_rdata = (BP_IS_ENCRYPTED(&cb->l2rcb_bp) && 8481 (cb->l2rcb_flags & ZIO_FLAG_RAW_ENCRYPT)); 8482 8483 ASSERT3P(zio->io_vd, !=, NULL); 8484 ASSERT(zio->io_flags & ZIO_FLAG_DONT_PROPAGATE); 8485 8486 spa_config_exit(zio->io_spa, SCL_L2ARC, zio->io_vd); 8487 8488 ASSERT3P(cb, !=, NULL); 8489 hdr = cb->l2rcb_hdr; 8490 ASSERT3P(hdr, !=, NULL); 8491 8492 hash_lock = HDR_LOCK(hdr); 8493 mutex_enter(hash_lock); 8494 ASSERT3P(hash_lock, ==, HDR_LOCK(hdr)); 8495 8496 /* 8497 * If the data was read into a temporary buffer, 8498 * move it and free the buffer. 
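 *
 * The temporary buffer (l2rcb_abd) is used when the aligned device read
 * is larger than arc_hdr_size(hdr); only the leading arc_hdr_size(hdr)
 * bytes are useful, so they are copied out here and the padded buffer is
 * freed.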
8499 */ 8500 if (cb->l2rcb_abd != NULL) { 8501 ASSERT3U(arc_hdr_size(hdr), <, zio->io_size); 8502 if (zio->io_error == 0) { 8503 if (using_rdata) { 8504 abd_copy(hdr->b_crypt_hdr.b_rabd, 8505 cb->l2rcb_abd, arc_hdr_size(hdr)); 8506 } else { 8507 abd_copy(hdr->b_l1hdr.b_pabd, 8508 cb->l2rcb_abd, arc_hdr_size(hdr)); 8509 } 8510 } 8511 8512 /* 8513 * The following must be done regardless of whether 8514 * there was an error: 8515 * - free the temporary buffer 8516 * - point zio to the real ARC buffer 8517 * - set zio size accordingly 8518 * These are required because zio is either re-used for 8519 * an I/O of the block in the case of the error 8520 * or the zio is passed to arc_read_done() and it 8521 * needs real data. 8522 */ 8523 abd_free(cb->l2rcb_abd); 8524 zio->io_size = zio->io_orig_size = arc_hdr_size(hdr); 8525 8526 if (using_rdata) { 8527 ASSERT(HDR_HAS_RABD(hdr)); 8528 zio->io_abd = zio->io_orig_abd = 8529 hdr->b_crypt_hdr.b_rabd; 8530 } else { 8531 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 8532 zio->io_abd = zio->io_orig_abd = hdr->b_l1hdr.b_pabd; 8533 } 8534 } 8535 8536 ASSERT3P(zio->io_abd, !=, NULL); 8537 8538 /* 8539 * Check this survived the L2ARC journey. 8540 */ 8541 ASSERT(zio->io_abd == hdr->b_l1hdr.b_pabd || 8542 (HDR_HAS_RABD(hdr) && zio->io_abd == hdr->b_crypt_hdr.b_rabd)); 8543 zio->io_bp_copy = cb->l2rcb_bp; /* XXX fix in L2ARC 2.0 */ 8544 zio->io_bp = &zio->io_bp_copy; /* XXX fix in L2ARC 2.0 */ 8545 zio->io_prop.zp_complevel = hdr->b_complevel; 8546 8547 valid_cksum = arc_cksum_is_equal(hdr, zio); 8548 8549 /* 8550 * b_rabd will always match the data as it exists on disk if it is 8551 * being used. Therefore if we are reading into b_rabd we do not 8552 * attempt to untransform the data. 8553 */ 8554 if (valid_cksum && !using_rdata) 8555 tfm_error = l2arc_untransform(zio, cb); 8556 8557 if (valid_cksum && tfm_error == 0 && zio->io_error == 0 && 8558 !HDR_L2_EVICTED(hdr)) { 8559 mutex_exit(hash_lock); 8560 zio->io_private = hdr; 8561 arc_read_done(zio); 8562 } else { 8563 /* 8564 * Buffer didn't survive caching. Increment stats and 8565 * reissue to the original storage device. 8566 */ 8567 if (zio->io_error != 0) { 8568 ARCSTAT_BUMP(arcstat_l2_io_error); 8569 } else { 8570 zio->io_error = SET_ERROR(EIO); 8571 } 8572 if (!valid_cksum || tfm_error != 0) 8573 ARCSTAT_BUMP(arcstat_l2_cksum_bad); 8574 8575 /* 8576 * If there's no waiter, issue an async i/o to the primary 8577 * storage now. If there *is* a waiter, the caller must 8578 * issue the i/o in a context where it's OK to block. 8579 */ 8580 if (zio->io_waiter == NULL) { 8581 zio_t *pio = zio_unique_parent(zio); 8582 void *abd = (using_rdata) ? 8583 hdr->b_crypt_hdr.b_rabd : hdr->b_l1hdr.b_pabd; 8584 8585 ASSERT(!pio || pio->io_child_type == ZIO_CHILD_LOGICAL); 8586 8587 zio = zio_read(pio, zio->io_spa, zio->io_bp, 8588 abd, zio->io_size, arc_read_done, 8589 hdr, zio->io_priority, cb->l2rcb_flags, 8590 &cb->l2rcb_zb); 8591 8592 /* 8593 * Original ZIO will be freed, so we need to update 8594 * ARC header with the new ZIO pointer to be used 8595 * by zio_change_priority() in arc_read(). 8596 */ 8597 for (struct arc_callback *acb = hdr->b_l1hdr.b_acb; 8598 acb != NULL; acb = acb->acb_next) 8599 acb->acb_zio_head = zio; 8600 8601 mutex_exit(hash_lock); 8602 zio_nowait(zio); 8603 } else { 8604 mutex_exit(hash_lock); 8605 } 8606 } 8607 8608 kmem_free(cb, sizeof (l2arc_read_callback_t)); 8609 } 8610 8611 /* 8612 * This is the list priority from which the L2ARC will search for pages to 8613 * cache. 
This is used within loops (0..3) to cycle through lists in the 8614 * desired order. This order can have a significant effect on cache 8615 * performance. 8616 * 8617 * Currently the metadata lists are hit first, MFU then MRU, followed by 8618 * the data lists. This function returns a locked list, and also returns 8619 * the lock pointer. 8620 */ 8621 static multilist_sublist_t * 8622 l2arc_sublist_lock(int list_num) 8623 { 8624 multilist_t *ml = NULL; 8625 unsigned int idx; 8626 8627 ASSERT(list_num >= 0 && list_num < L2ARC_FEED_TYPES); 8628 8629 switch (list_num) { 8630 case 0: 8631 ml = &arc_mfu->arcs_list[ARC_BUFC_METADATA]; 8632 break; 8633 case 1: 8634 ml = &arc_mru->arcs_list[ARC_BUFC_METADATA]; 8635 break; 8636 case 2: 8637 ml = &arc_mfu->arcs_list[ARC_BUFC_DATA]; 8638 break; 8639 case 3: 8640 ml = &arc_mru->arcs_list[ARC_BUFC_DATA]; 8641 break; 8642 default: 8643 return (NULL); 8644 } 8645 8646 /* 8647 * Return a randomly-selected sublist. This is acceptable 8648 * because the caller feeds only a little bit of data for each 8649 * call (8MB). Subsequent calls will result in different 8650 * sublists being selected. 8651 */ 8652 idx = multilist_get_random_index(ml); 8653 return (multilist_sublist_lock(ml, idx)); 8654 } 8655 8656 /* 8657 * Calculates the maximum overhead of L2ARC metadata log blocks for a given 8658 * L2ARC write size. l2arc_evict and l2arc_write_size need to include this 8659 * overhead in processing to make sure there is enough headroom available 8660 * when writing buffers. 8661 */ 8662 static inline uint64_t 8663 l2arc_log_blk_overhead(uint64_t write_sz, l2arc_dev_t *dev) 8664 { 8665 if (dev->l2ad_log_entries == 0) { 8666 return (0); 8667 } else { 8668 uint64_t log_entries = write_sz >> SPA_MINBLOCKSHIFT; 8669 8670 uint64_t log_blocks = (log_entries + 8671 dev->l2ad_log_entries - 1) / 8672 dev->l2ad_log_entries; 8673 8674 return (vdev_psize_to_asize(dev->l2ad_vdev, 8675 sizeof (l2arc_log_blk_phys_t)) * log_blocks); 8676 } 8677 } 8678 8679 /* 8680 * Evict buffers from the device write hand to the distance specified in 8681 * bytes. This distance may span populated buffers, it may span nothing. 8682 * This is clearing a region on the L2ARC device ready for writing. 8683 * If the 'all' boolean is set, every buffer is evicted. 8684 */ 8685 static void 8686 l2arc_evict(l2arc_dev_t *dev, uint64_t distance, boolean_t all) 8687 { 8688 list_t *buflist; 8689 arc_buf_hdr_t *hdr, *hdr_prev; 8690 kmutex_t *hash_lock; 8691 uint64_t taddr; 8692 l2arc_lb_ptr_buf_t *lb_ptr_buf, *lb_ptr_buf_prev; 8693 vdev_t *vd = dev->l2ad_vdev; 8694 boolean_t rerun; 8695 8696 buflist = &dev->l2ad_buflist; 8697 8698 top: 8699 rerun = B_FALSE; 8700 if (dev->l2ad_hand + distance > dev->l2ad_end) { 8701 /* 8702 * When there is no space to accommodate upcoming writes, 8703 * evict to the end. Then bump the write and evict hands 8704 * to the start and iterate. This iteration does not 8705 * happen indefinitely as we make sure in 8706 * l2arc_write_size() that when the write hand is reset, 8707 * the write size does not exceed the end of the device. 8708 */ 8709 rerun = B_TRUE; 8710 taddr = dev->l2ad_end; 8711 } else { 8712 taddr = dev->l2ad_hand + distance; 8713 } 8714 DTRACE_PROBE4(l2arc__evict, l2arc_dev_t *, dev, list_t *, buflist, 8715 uint64_t, taddr, boolean_t, all); 8716 8717 if (!all) { 8718 /* 8719 * This check has to be placed after deciding whether to 8720 * iterate (rerun). 8721 */ 8722 if (dev->l2ad_first) { 8723 /* 8724 * This is the first sweep through the device. 
There is 8725 * nothing to evict. We have already trimmmed the 8726 * whole device. 8727 */ 8728 goto out; 8729 } else { 8730 /* 8731 * Trim the space to be evicted. 8732 */ 8733 if (vd->vdev_has_trim && dev->l2ad_evict < taddr && 8734 l2arc_trim_ahead > 0) { 8735 /* 8736 * We have to drop the spa_config lock because 8737 * vdev_trim_range() will acquire it. 8738 * l2ad_evict already accounts for the label 8739 * size. To prevent vdev_trim_ranges() from 8740 * adding it again, we subtract it from 8741 * l2ad_evict. 8742 */ 8743 spa_config_exit(dev->l2ad_spa, SCL_L2ARC, dev); 8744 vdev_trim_simple(vd, 8745 dev->l2ad_evict - VDEV_LABEL_START_SIZE, 8746 taddr - dev->l2ad_evict); 8747 spa_config_enter(dev->l2ad_spa, SCL_L2ARC, dev, 8748 RW_READER); 8749 } 8750 8751 /* 8752 * When rebuilding L2ARC we retrieve the evict hand 8753 * from the header of the device. Of note, l2arc_evict() 8754 * does not actually delete buffers from the cache 8755 * device, but trimming may do so depending on the 8756 * hardware implementation. Thus keeping track of the 8757 * evict hand is useful. 8758 */ 8759 dev->l2ad_evict = MAX(dev->l2ad_evict, taddr); 8760 } 8761 } 8762 8763 retry: 8764 mutex_enter(&dev->l2ad_mtx); 8765 /* 8766 * We have to account for evicted log blocks. Run vdev_space_update() 8767 * on log blocks whose offset (in bytes) is before the evicted offset 8768 * (in bytes) by searching in the list of pointers to log blocks 8769 * present in the L2ARC device. 8770 */ 8771 for (lb_ptr_buf = list_tail(&dev->l2ad_lbptr_list); lb_ptr_buf; 8772 lb_ptr_buf = lb_ptr_buf_prev) { 8773 8774 lb_ptr_buf_prev = list_prev(&dev->l2ad_lbptr_list, lb_ptr_buf); 8775 8776 /* L2BLK_GET_PSIZE returns aligned size for log blocks */ 8777 uint64_t asize = L2BLK_GET_PSIZE( 8778 (lb_ptr_buf->lb_ptr)->lbp_prop); 8779 8780 /* 8781 * We don't worry about log blocks left behind (ie 8782 * lbp_payload_start < l2ad_hand) because l2arc_write_buffers() 8783 * will never write more than l2arc_evict() evicts. 8784 */ 8785 if (!all && l2arc_log_blkptr_valid(dev, lb_ptr_buf->lb_ptr)) { 8786 break; 8787 } else { 8788 vdev_space_update(vd, -asize, 0, 0); 8789 ARCSTAT_INCR(arcstat_l2_log_blk_asize, -asize); 8790 ARCSTAT_BUMPDOWN(arcstat_l2_log_blk_count); 8791 zfs_refcount_remove_many(&dev->l2ad_lb_asize, asize, 8792 lb_ptr_buf); 8793 zfs_refcount_remove(&dev->l2ad_lb_count, lb_ptr_buf); 8794 list_remove(&dev->l2ad_lbptr_list, lb_ptr_buf); 8795 kmem_free(lb_ptr_buf->lb_ptr, 8796 sizeof (l2arc_log_blkptr_t)); 8797 kmem_free(lb_ptr_buf, sizeof (l2arc_lb_ptr_buf_t)); 8798 } 8799 } 8800 8801 for (hdr = list_tail(buflist); hdr; hdr = hdr_prev) { 8802 hdr_prev = list_prev(buflist, hdr); 8803 8804 ASSERT(!HDR_EMPTY(hdr)); 8805 hash_lock = HDR_LOCK(hdr); 8806 8807 /* 8808 * We cannot use mutex_enter or else we can deadlock 8809 * with l2arc_write_buffers (due to swapping the order 8810 * the hash lock and l2ad_mtx are taken). 8811 */ 8812 if (!mutex_tryenter(hash_lock)) { 8813 /* 8814 * Missed the hash lock. Retry. 8815 */ 8816 ARCSTAT_BUMP(arcstat_l2_evict_lock_retry); 8817 mutex_exit(&dev->l2ad_mtx); 8818 mutex_enter(hash_lock); 8819 mutex_exit(hash_lock); 8820 goto retry; 8821 } 8822 8823 /* 8824 * A header can't be on this list if it doesn't have L2 header. 8825 */ 8826 ASSERT(HDR_HAS_L2HDR(hdr)); 8827 8828 /* Ensure this header has finished being written. 
*/ 8829 ASSERT(!HDR_L2_WRITING(hdr)); 8830 ASSERT(!HDR_L2_WRITE_HEAD(hdr)); 8831 8832 if (!all && (hdr->b_l2hdr.b_daddr >= dev->l2ad_evict || 8833 hdr->b_l2hdr.b_daddr < dev->l2ad_hand)) { 8834 /* 8835 * We've evicted to the target address, 8836 * or the end of the device. 8837 */ 8838 mutex_exit(hash_lock); 8839 break; 8840 } 8841 8842 if (!HDR_HAS_L1HDR(hdr)) { 8843 ASSERT(!HDR_L2_READING(hdr)); 8844 /* 8845 * This doesn't exist in the ARC. Destroy. 8846 * arc_hdr_destroy() will call list_remove() 8847 * and decrement arcstat_l2_lsize. 8848 */ 8849 arc_change_state(arc_anon, hdr); 8850 arc_hdr_destroy(hdr); 8851 } else { 8852 ASSERT(hdr->b_l1hdr.b_state != arc_l2c_only); 8853 ARCSTAT_BUMP(arcstat_l2_evict_l1cached); 8854 /* 8855 * Invalidate issued or about to be issued 8856 * reads, since we may be about to write 8857 * over this location. 8858 */ 8859 if (HDR_L2_READING(hdr)) { 8860 ARCSTAT_BUMP(arcstat_l2_evict_reading); 8861 arc_hdr_set_flags(hdr, ARC_FLAG_L2_EVICTED); 8862 } 8863 8864 arc_hdr_l2hdr_destroy(hdr); 8865 } 8866 mutex_exit(hash_lock); 8867 } 8868 mutex_exit(&dev->l2ad_mtx); 8869 8870 out: 8871 /* 8872 * We need to check if we evict all buffers, otherwise we may iterate 8873 * unnecessarily. 8874 */ 8875 if (!all && rerun) { 8876 /* 8877 * Bump device hand to the device start if it is approaching the 8878 * end. l2arc_evict() has already evicted ahead for this case. 8879 */ 8880 dev->l2ad_hand = dev->l2ad_start; 8881 dev->l2ad_evict = dev->l2ad_start; 8882 dev->l2ad_first = B_FALSE; 8883 goto top; 8884 } 8885 8886 if (!all) { 8887 /* 8888 * In case of cache device removal (all) the following 8889 * assertions may be violated without functional consequences 8890 * as the device is about to be removed. 8891 */ 8892 ASSERT3U(dev->l2ad_hand + distance, <, dev->l2ad_end); 8893 if (!dev->l2ad_first) 8894 ASSERT3U(dev->l2ad_hand, <=, dev->l2ad_evict); 8895 } 8896 } 8897 8898 /* 8899 * Handle any abd transforms that might be required for writing to the L2ARC. 8900 * If successful, this function will always return an abd with the data 8901 * transformed as it is on disk in a new abd of asize bytes. 8902 */ 8903 static int 8904 l2arc_apply_transforms(spa_t *spa, arc_buf_hdr_t *hdr, uint64_t asize, 8905 abd_t **abd_out) 8906 { 8907 int ret; 8908 void *tmp = NULL; 8909 abd_t *cabd = NULL, *eabd = NULL, *to_write = hdr->b_l1hdr.b_pabd; 8910 enum zio_compress compress = HDR_GET_COMPRESS(hdr); 8911 uint64_t psize = HDR_GET_PSIZE(hdr); 8912 uint64_t size = arc_hdr_size(hdr); 8913 boolean_t ismd = HDR_ISTYPE_METADATA(hdr); 8914 boolean_t bswap = (hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS); 8915 dsl_crypto_key_t *dck = NULL; 8916 uint8_t mac[ZIO_DATA_MAC_LEN] = { 0 }; 8917 boolean_t no_crypt = B_FALSE; 8918 8919 ASSERT((HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && 8920 !HDR_COMPRESSION_ENABLED(hdr)) || 8921 HDR_ENCRYPTED(hdr) || HDR_SHARED_DATA(hdr) || psize != asize); 8922 ASSERT3U(psize, <=, asize); 8923 8924 /* 8925 * If this data simply needs its own buffer, we simply allocate it 8926 * and copy the data. This may be done to eliminate a dependency on a 8927 * shared buffer or to reallocate the buffer to match asize. 
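 *
 * Each case below follows the same basic pattern, roughly:
 *
 *	to_write = abd_alloc_for_io(asize, ismd);
 *	abd_copy(to_write, <source abd>, <source size>);
 *	if (<source size> != asize)
 *		abd_zero_off(to_write, <source size>, asize - <source size>);
 *
 * i.e. allocate an asize-sized I/O buffer, copy in the (possibly
 * compressed and/or encrypted) payload, and zero-pad the tail so the
 * full allocated size is written out.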
8928 */ 8929 if (HDR_HAS_RABD(hdr) && asize != psize) { 8930 ASSERT3U(asize, >=, psize); 8931 to_write = abd_alloc_for_io(asize, ismd); 8932 abd_copy(to_write, hdr->b_crypt_hdr.b_rabd, psize); 8933 if (psize != asize) 8934 abd_zero_off(to_write, psize, asize - psize); 8935 goto out; 8936 } 8937 8938 if ((compress == ZIO_COMPRESS_OFF || HDR_COMPRESSION_ENABLED(hdr)) && 8939 !HDR_ENCRYPTED(hdr)) { 8940 ASSERT3U(size, ==, psize); 8941 to_write = abd_alloc_for_io(asize, ismd); 8942 abd_copy(to_write, hdr->b_l1hdr.b_pabd, size); 8943 if (size != asize) 8944 abd_zero_off(to_write, size, asize - size); 8945 goto out; 8946 } 8947 8948 if (compress != ZIO_COMPRESS_OFF && !HDR_COMPRESSION_ENABLED(hdr)) { 8949 /* 8950 * In some cases, we can wind up with size > asize, so 8951 * we need to opt for the larger allocation option here. 8952 * 8953 * (We also need abd_return_buf_copy in all cases because 8954 * it's an ASSERT() to modify the buffer before returning it 8955 * with arc_return_buf(), and all the compressors 8956 * write things before deciding to fail compression in nearly 8957 * every case.) 8958 */ 8959 uint64_t bufsize = MAX(size, asize); 8960 cabd = abd_alloc_for_io(bufsize, ismd); 8961 tmp = abd_borrow_buf(cabd, bufsize); 8962 8963 psize = zio_compress_data(compress, to_write, &tmp, size, 8964 hdr->b_complevel); 8965 8966 if (psize >= asize) { 8967 psize = HDR_GET_PSIZE(hdr); 8968 abd_return_buf_copy(cabd, tmp, bufsize); 8969 HDR_SET_COMPRESS(hdr, ZIO_COMPRESS_OFF); 8970 to_write = cabd; 8971 abd_copy(to_write, hdr->b_l1hdr.b_pabd, psize); 8972 if (psize != asize) 8973 abd_zero_off(to_write, psize, asize - psize); 8974 goto encrypt; 8975 } 8976 ASSERT3U(psize, <=, HDR_GET_PSIZE(hdr)); 8977 if (psize < asize) 8978 memset((char *)tmp + psize, 0, bufsize - psize); 8979 psize = HDR_GET_PSIZE(hdr); 8980 abd_return_buf_copy(cabd, tmp, bufsize); 8981 to_write = cabd; 8982 } 8983 8984 encrypt: 8985 if (HDR_ENCRYPTED(hdr)) { 8986 eabd = abd_alloc_for_io(asize, ismd); 8987 8988 /* 8989 * If the dataset was disowned before the buffer 8990 * made it to this point, the key to re-encrypt 8991 * it won't be available. In this case we simply 8992 * won't write the buffer to the L2ARC. 
8993 */ 8994 ret = spa_keystore_lookup_key(spa, hdr->b_crypt_hdr.b_dsobj, 8995 FTAG, &dck); 8996 if (ret != 0) 8997 goto error; 8998 8999 ret = zio_do_crypt_abd(B_TRUE, &dck->dck_key, 9000 hdr->b_crypt_hdr.b_ot, bswap, hdr->b_crypt_hdr.b_salt, 9001 hdr->b_crypt_hdr.b_iv, mac, psize, to_write, eabd, 9002 &no_crypt); 9003 if (ret != 0) 9004 goto error; 9005 9006 if (no_crypt) 9007 abd_copy(eabd, to_write, psize); 9008 9009 if (psize != asize) 9010 abd_zero_off(eabd, psize, asize - psize); 9011 9012 /* assert that the MAC we got here matches the one we saved */ 9013 ASSERT0(memcmp(mac, hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN)); 9014 spa_keystore_dsl_key_rele(spa, dck, FTAG); 9015 9016 if (to_write == cabd) 9017 abd_free(cabd); 9018 9019 to_write = eabd; 9020 } 9021 9022 out: 9023 ASSERT3P(to_write, !=, hdr->b_l1hdr.b_pabd); 9024 *abd_out = to_write; 9025 return (0); 9026 9027 error: 9028 if (dck != NULL) 9029 spa_keystore_dsl_key_rele(spa, dck, FTAG); 9030 if (cabd != NULL) 9031 abd_free(cabd); 9032 if (eabd != NULL) 9033 abd_free(eabd); 9034 9035 *abd_out = NULL; 9036 return (ret); 9037 } 9038 9039 static void 9040 l2arc_blk_fetch_done(zio_t *zio) 9041 { 9042 l2arc_read_callback_t *cb; 9043 9044 cb = zio->io_private; 9045 if (cb->l2rcb_abd != NULL) 9046 abd_free(cb->l2rcb_abd); 9047 kmem_free(cb, sizeof (l2arc_read_callback_t)); 9048 } 9049 9050 /* 9051 * Find and write ARC buffers to the L2ARC device. 9052 * 9053 * An ARC_FLAG_L2_WRITING flag is set so that the L2ARC buffers are not valid 9054 * for reading until they have completed writing. 9055 * The headroom_boost is an in-out parameter used to maintain headroom boost 9056 * state between calls to this function. 9057 * 9058 * Returns the number of bytes actually written (which may be smaller than 9059 * the delta by which the device hand has changed due to alignment and the 9060 * writing of log blocks). 9061 */ 9062 static uint64_t 9063 l2arc_write_buffers(spa_t *spa, l2arc_dev_t *dev, uint64_t target_sz) 9064 { 9065 arc_buf_hdr_t *hdr, *hdr_prev, *head; 9066 uint64_t write_asize, write_psize, write_lsize, headroom; 9067 boolean_t full; 9068 l2arc_write_callback_t *cb = NULL; 9069 zio_t *pio, *wzio; 9070 uint64_t guid = spa_load_guid(spa); 9071 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 9072 9073 ASSERT3P(dev->l2ad_vdev, !=, NULL); 9074 9075 pio = NULL; 9076 write_lsize = write_asize = write_psize = 0; 9077 full = B_FALSE; 9078 head = kmem_cache_alloc(hdr_l2only_cache, KM_PUSHPAGE); 9079 arc_hdr_set_flags(head, ARC_FLAG_L2_WRITE_HEAD | ARC_FLAG_HAS_L2HDR); 9080 9081 /* 9082 * Copy buffers for L2ARC writing. 9083 */ 9084 for (int pass = 0; pass < L2ARC_FEED_TYPES; pass++) { 9085 /* 9086 * If pass == 1 or 3, we cache MRU metadata and data 9087 * respectively. 9088 */ 9089 if (l2arc_mfuonly) { 9090 if (pass == 1 || pass == 3) 9091 continue; 9092 } 9093 9094 multilist_sublist_t *mls = l2arc_sublist_lock(pass); 9095 uint64_t passed_sz = 0; 9096 9097 VERIFY3P(mls, !=, NULL); 9098 9099 /* 9100 * L2ARC fast warmup. 9101 * 9102 * Until the ARC is warm and starts to evict, read from the 9103 * head of the ARC lists rather than the tail. 
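 *
 * Starting from the head picks up the most recently cached buffers
 * first, so a cache device begins to fill without having to wait for
 * those buffers to age toward the tail of the lists.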
9104 */ 9105 if (arc_warm == B_FALSE) 9106 hdr = multilist_sublist_head(mls); 9107 else 9108 hdr = multilist_sublist_tail(mls); 9109 9110 headroom = target_sz * l2arc_headroom; 9111 if (zfs_compressed_arc_enabled) 9112 headroom = (headroom * l2arc_headroom_boost) / 100; 9113 9114 for (; hdr; hdr = hdr_prev) { 9115 kmutex_t *hash_lock; 9116 abd_t *to_write = NULL; 9117 9118 if (arc_warm == B_FALSE) 9119 hdr_prev = multilist_sublist_next(mls, hdr); 9120 else 9121 hdr_prev = multilist_sublist_prev(mls, hdr); 9122 9123 hash_lock = HDR_LOCK(hdr); 9124 if (!mutex_tryenter(hash_lock)) { 9125 /* 9126 * Skip this buffer rather than waiting. 9127 */ 9128 continue; 9129 } 9130 9131 passed_sz += HDR_GET_LSIZE(hdr); 9132 if (l2arc_headroom != 0 && passed_sz > headroom) { 9133 /* 9134 * Searched too far. 9135 */ 9136 mutex_exit(hash_lock); 9137 break; 9138 } 9139 9140 if (!l2arc_write_eligible(guid, hdr)) { 9141 mutex_exit(hash_lock); 9142 continue; 9143 } 9144 9145 ASSERT(HDR_HAS_L1HDR(hdr)); 9146 9147 ASSERT3U(HDR_GET_PSIZE(hdr), >, 0); 9148 ASSERT3U(arc_hdr_size(hdr), >, 0); 9149 ASSERT(hdr->b_l1hdr.b_pabd != NULL || 9150 HDR_HAS_RABD(hdr)); 9151 uint64_t psize = HDR_GET_PSIZE(hdr); 9152 uint64_t asize = vdev_psize_to_asize(dev->l2ad_vdev, 9153 psize); 9154 9155 /* 9156 * If the allocated size of this buffer plus the max 9157 * size for the pending log block exceeds the evicted 9158 * target size, terminate writing buffers for this run. 9159 */ 9160 if (write_asize + asize + 9161 sizeof (l2arc_log_blk_phys_t) > target_sz) { 9162 full = B_TRUE; 9163 mutex_exit(hash_lock); 9164 break; 9165 } 9166 9167 /* 9168 * We rely on the L1 portion of the header below, so 9169 * it's invalid for this header to have been evicted out 9170 * of the ghost cache, prior to being written out. The 9171 * ARC_FLAG_L2_WRITING bit ensures this won't happen. 9172 */ 9173 arc_hdr_set_flags(hdr, ARC_FLAG_L2_WRITING); 9174 9175 /* 9176 * If this header has b_rabd, we can use this since it 9177 * must always match the data exactly as it exists on 9178 * disk. Otherwise, the L2ARC can normally use the 9179 * hdr's data, but if we're sharing data between the 9180 * hdr and one of its bufs, L2ARC needs its own copy of 9181 * the data so that the ZIO below can't race with the 9182 * buf consumer. To ensure that this copy will be 9183 * available for the lifetime of the ZIO and be cleaned 9184 * up afterwards, we add it to the l2arc_free_on_write 9185 * queue. If we need to apply any transforms to the 9186 * data (compression, encryption) we will also need the 9187 * extra buffer. 9188 */ 9189 if (HDR_HAS_RABD(hdr) && psize == asize) { 9190 to_write = hdr->b_crypt_hdr.b_rabd; 9191 } else if ((HDR_COMPRESSION_ENABLED(hdr) || 9192 HDR_GET_COMPRESS(hdr) == ZIO_COMPRESS_OFF) && 9193 !HDR_ENCRYPTED(hdr) && !HDR_SHARED_DATA(hdr) && 9194 psize == asize) { 9195 to_write = hdr->b_l1hdr.b_pabd; 9196 } else { 9197 int ret; 9198 arc_buf_contents_t type = arc_buf_type(hdr); 9199 9200 ret = l2arc_apply_transforms(spa, hdr, asize, 9201 &to_write); 9202 if (ret != 0) { 9203 arc_hdr_clear_flags(hdr, 9204 ARC_FLAG_L2_WRITING); 9205 mutex_exit(hash_lock); 9206 continue; 9207 } 9208 9209 l2arc_free_abd_on_write(to_write, asize, type); 9210 } 9211 9212 if (pio == NULL) { 9213 /* 9214 * Insert a dummy header on the buflist so 9215 * l2arc_write_done() can find where the 9216 * write buffers begin without searching. 
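 *
 * After a few buffers have been queued, the device buflist looks
 * roughly like (newest at the front):
 *
 *	hdrN -> ... -> hdr2 -> hdr1 -> head (dummy) -> previously written hdrs
 *
 * l2arc_write_done() starts at the dummy marker and walks list_prev()
 * from there, so it only ever touches the headers queued by this write.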
9217 */ 9218 mutex_enter(&dev->l2ad_mtx); 9219 list_insert_head(&dev->l2ad_buflist, head); 9220 mutex_exit(&dev->l2ad_mtx); 9221 9222 cb = kmem_alloc( 9223 sizeof (l2arc_write_callback_t), KM_SLEEP); 9224 cb->l2wcb_dev = dev; 9225 cb->l2wcb_head = head; 9226 /* 9227 * Create a list to save allocated abd buffers 9228 * for l2arc_log_blk_commit(). 9229 */ 9230 list_create(&cb->l2wcb_abd_list, 9231 sizeof (l2arc_lb_abd_buf_t), 9232 offsetof(l2arc_lb_abd_buf_t, node)); 9233 pio = zio_root(spa, l2arc_write_done, cb, 9234 ZIO_FLAG_CANFAIL); 9235 } 9236 9237 hdr->b_l2hdr.b_dev = dev; 9238 hdr->b_l2hdr.b_hits = 0; 9239 9240 hdr->b_l2hdr.b_daddr = dev->l2ad_hand; 9241 hdr->b_l2hdr.b_arcs_state = 9242 hdr->b_l1hdr.b_state->arcs_state; 9243 arc_hdr_set_flags(hdr, ARC_FLAG_HAS_L2HDR); 9244 9245 mutex_enter(&dev->l2ad_mtx); 9246 list_insert_head(&dev->l2ad_buflist, hdr); 9247 mutex_exit(&dev->l2ad_mtx); 9248 9249 (void) zfs_refcount_add_many(&dev->l2ad_alloc, 9250 arc_hdr_size(hdr), hdr); 9251 9252 wzio = zio_write_phys(pio, dev->l2ad_vdev, 9253 hdr->b_l2hdr.b_daddr, asize, to_write, 9254 ZIO_CHECKSUM_OFF, NULL, hdr, 9255 ZIO_PRIORITY_ASYNC_WRITE, 9256 ZIO_FLAG_CANFAIL, B_FALSE); 9257 9258 write_lsize += HDR_GET_LSIZE(hdr); 9259 DTRACE_PROBE2(l2arc__write, vdev_t *, dev->l2ad_vdev, 9260 zio_t *, wzio); 9261 9262 write_psize += psize; 9263 write_asize += asize; 9264 dev->l2ad_hand += asize; 9265 l2arc_hdr_arcstats_increment(hdr); 9266 vdev_space_update(dev->l2ad_vdev, asize, 0, 0); 9267 9268 mutex_exit(hash_lock); 9269 9270 /* 9271 * Append buf info to current log and commit if full. 9272 * arcstat_l2_{size,asize} kstats are updated 9273 * internally. 9274 */ 9275 if (l2arc_log_blk_insert(dev, hdr)) { 9276 /* 9277 * l2ad_hand will be adjusted in 9278 * l2arc_log_blk_commit(). 9279 */ 9280 write_asize += 9281 l2arc_log_blk_commit(dev, pio, cb); 9282 } 9283 9284 zio_nowait(wzio); 9285 } 9286 9287 multilist_sublist_unlock(mls); 9288 9289 if (full == B_TRUE) 9290 break; 9291 } 9292 9293 /* No buffers selected for writing? */ 9294 if (pio == NULL) { 9295 ASSERT0(write_lsize); 9296 ASSERT(!HDR_HAS_L1HDR(head)); 9297 kmem_cache_free(hdr_l2only_cache, head); 9298 9299 /* 9300 * Although we did not write any buffers l2ad_evict may 9301 * have advanced. 9302 */ 9303 if (dev->l2ad_evict != l2dhdr->dh_evict) 9304 l2arc_dev_hdr_update(dev); 9305 9306 return (0); 9307 } 9308 9309 if (!dev->l2ad_first) 9310 ASSERT3U(dev->l2ad_hand, <=, dev->l2ad_evict); 9311 9312 ASSERT3U(write_asize, <=, target_sz); 9313 ARCSTAT_BUMP(arcstat_l2_writes_sent); 9314 ARCSTAT_INCR(arcstat_l2_write_bytes, write_psize); 9315 9316 dev->l2ad_writing = B_TRUE; 9317 (void) zio_wait(pio); 9318 dev->l2ad_writing = B_FALSE; 9319 9320 /* 9321 * Update the device header after the zio completes as 9322 * l2arc_write_done() may have updated the memory holding the log block 9323 * pointers in the device header. 9324 */ 9325 l2arc_dev_hdr_update(dev); 9326 9327 return (write_asize); 9328 } 9329 9330 static boolean_t 9331 l2arc_hdr_limit_reached(void) 9332 { 9333 int64_t s = aggsum_upper_bound(&arc_sums.arcstat_l2_hdr_size); 9334 9335 return (arc_reclaim_needed() || 9336 (s > (arc_warm ? arc_c : arc_c_max) * l2arc_meta_percent / 100)); 9337 } 9338 9339 /* 9340 * This thread feeds the L2ARC at regular intervals. This is the beating 9341 * heart of the L2ARC. 
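 *
 * Each pass through the feed loop below boils down to roughly:
 *
 *	sleep until the next interval (or until we are woken up);
 *	dev = l2arc_dev_get_next();          writable, non-faulted device
 *	size = l2arc_write_size(dev);        target bytes for this cycle
 *	l2arc_evict(dev, size, B_FALSE);     clear that much space ahead
 *	wrote = l2arc_write_buffers(spa, dev, size);
 *	next = l2arc_write_interval(begin, size, wrote);
 *
 * with early bail-outs when there are no cache devices, the pool is
 * read-only, or l2arc_hdr_limit_reached() reports memory pressure.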
9342 */ 9343 static __attribute__((noreturn)) void 9344 l2arc_feed_thread(void *unused) 9345 { 9346 (void) unused; 9347 callb_cpr_t cpr; 9348 l2arc_dev_t *dev; 9349 spa_t *spa; 9350 uint64_t size, wrote; 9351 clock_t begin, next = ddi_get_lbolt(); 9352 fstrans_cookie_t cookie; 9353 9354 CALLB_CPR_INIT(&cpr, &l2arc_feed_thr_lock, callb_generic_cpr, FTAG); 9355 9356 mutex_enter(&l2arc_feed_thr_lock); 9357 9358 cookie = spl_fstrans_mark(); 9359 while (l2arc_thread_exit == 0) { 9360 CALLB_CPR_SAFE_BEGIN(&cpr); 9361 (void) cv_timedwait_idle(&l2arc_feed_thr_cv, 9362 &l2arc_feed_thr_lock, next); 9363 CALLB_CPR_SAFE_END(&cpr, &l2arc_feed_thr_lock); 9364 next = ddi_get_lbolt() + hz; 9365 9366 /* 9367 * Quick check for L2ARC devices. 9368 */ 9369 mutex_enter(&l2arc_dev_mtx); 9370 if (l2arc_ndev == 0) { 9371 mutex_exit(&l2arc_dev_mtx); 9372 continue; 9373 } 9374 mutex_exit(&l2arc_dev_mtx); 9375 begin = ddi_get_lbolt(); 9376 9377 /* 9378 * This selects the next l2arc device to write to, and in 9379 * doing so the next spa to feed from: dev->l2ad_spa. This 9380 * will return NULL if there are now no l2arc devices or if 9381 * they are all faulted. 9382 * 9383 * If a device is returned, its spa's config lock is also 9384 * held to prevent device removal. l2arc_dev_get_next() 9385 * will grab and release l2arc_dev_mtx. 9386 */ 9387 if ((dev = l2arc_dev_get_next()) == NULL) 9388 continue; 9389 9390 spa = dev->l2ad_spa; 9391 ASSERT3P(spa, !=, NULL); 9392 9393 /* 9394 * If the pool is read-only then force the feed thread to 9395 * sleep a little longer. 9396 */ 9397 if (!spa_writeable(spa)) { 9398 next = ddi_get_lbolt() + 5 * l2arc_feed_secs * hz; 9399 spa_config_exit(spa, SCL_L2ARC, dev); 9400 continue; 9401 } 9402 9403 /* 9404 * Avoid contributing to memory pressure. 9405 */ 9406 if (l2arc_hdr_limit_reached()) { 9407 ARCSTAT_BUMP(arcstat_l2_abort_lowmem); 9408 spa_config_exit(spa, SCL_L2ARC, dev); 9409 continue; 9410 } 9411 9412 ARCSTAT_BUMP(arcstat_l2_feeds); 9413 9414 size = l2arc_write_size(dev); 9415 9416 /* 9417 * Evict L2ARC buffers that will be overwritten. 9418 */ 9419 l2arc_evict(dev, size, B_FALSE); 9420 9421 /* 9422 * Write ARC buffers. 9423 */ 9424 wrote = l2arc_write_buffers(spa, dev, size); 9425 9426 /* 9427 * Calculate interval between writes. 9428 */ 9429 next = l2arc_write_interval(begin, size, wrote); 9430 spa_config_exit(spa, SCL_L2ARC, dev); 9431 } 9432 spl_fstrans_unmark(cookie); 9433 9434 l2arc_thread_exit = 0; 9435 cv_broadcast(&l2arc_feed_thr_cv); 9436 CALLB_CPR_EXIT(&cpr); /* drops l2arc_feed_thr_lock */ 9437 thread_exit(); 9438 } 9439 9440 boolean_t 9441 l2arc_vdev_present(vdev_t *vd) 9442 { 9443 return (l2arc_vdev_get(vd) != NULL); 9444 } 9445 9446 /* 9447 * Returns the l2arc_dev_t associated with a particular vdev_t or NULL if 9448 * the vdev_t isn't an L2ARC device. 9449 */ 9450 l2arc_dev_t * 9451 l2arc_vdev_get(vdev_t *vd) 9452 { 9453 l2arc_dev_t *dev; 9454 9455 mutex_enter(&l2arc_dev_mtx); 9456 for (dev = list_head(l2arc_dev_list); dev != NULL; 9457 dev = list_next(l2arc_dev_list, dev)) { 9458 if (dev->l2ad_vdev == vd) 9459 break; 9460 } 9461 mutex_exit(&l2arc_dev_mtx); 9462 9463 return (dev); 9464 } 9465 9466 static void 9467 l2arc_rebuild_dev(l2arc_dev_t *dev, boolean_t reopen) 9468 { 9469 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 9470 uint64_t l2dhdr_asize = dev->l2ad_dev_hdr_asize; 9471 spa_t *spa = dev->l2ad_spa; 9472 9473 /* 9474 * The L2ARC has to hold at least the payload of one log block for 9475 * them to be restored (persistent L2ARC). 
The payload of a log block 9476 * depends on the amount of its log entries. We always write log blocks 9477 * with 1022 entries. How many of them are committed or restored depends 9478 * on the size of the L2ARC device. Thus the maximum payload of 9479 * one log block is 1022 * SPA_MAXBLOCKSIZE = 16GB. If the L2ARC device 9480 * is less than that, we reduce the amount of committed and restored 9481 * log entries per block so as to enable persistence. 9482 */ 9483 if (dev->l2ad_end < l2arc_rebuild_blocks_min_l2size) { 9484 dev->l2ad_log_entries = 0; 9485 } else { 9486 dev->l2ad_log_entries = MIN((dev->l2ad_end - 9487 dev->l2ad_start) >> SPA_MAXBLOCKSHIFT, 9488 L2ARC_LOG_BLK_MAX_ENTRIES); 9489 } 9490 9491 /* 9492 * Read the device header, if an error is returned do not rebuild L2ARC. 9493 */ 9494 if (l2arc_dev_hdr_read(dev) == 0 && dev->l2ad_log_entries > 0) { 9495 /* 9496 * If we are onlining a cache device (vdev_reopen) that was 9497 * still present (l2arc_vdev_present()) and rebuild is enabled, 9498 * we should evict all ARC buffers and pointers to log blocks 9499 * and reclaim their space before restoring its contents to 9500 * L2ARC. 9501 */ 9502 if (reopen) { 9503 if (!l2arc_rebuild_enabled) { 9504 return; 9505 } else { 9506 l2arc_evict(dev, 0, B_TRUE); 9507 /* start a new log block */ 9508 dev->l2ad_log_ent_idx = 0; 9509 dev->l2ad_log_blk_payload_asize = 0; 9510 dev->l2ad_log_blk_payload_start = 0; 9511 } 9512 } 9513 /* 9514 * Just mark the device as pending for a rebuild. We won't 9515 * be starting a rebuild in line here as it would block pool 9516 * import. Instead spa_load_impl will hand that off to an 9517 * async task which will call l2arc_spa_rebuild_start. 9518 */ 9519 dev->l2ad_rebuild = B_TRUE; 9520 } else if (spa_writeable(spa)) { 9521 /* 9522 * In this case TRIM the whole device if l2arc_trim_ahead > 0, 9523 * otherwise create a new header. We zero out the memory holding 9524 * the header to reset dh_start_lbps. If we TRIM the whole 9525 * device the new header will be written by 9526 * vdev_trim_l2arc_thread() at the end of the TRIM to update the 9527 * trim_state in the header too. When reading the header, if 9528 * trim_state is not VDEV_TRIM_COMPLETE and l2arc_trim_ahead > 0 9529 * we opt to TRIM the whole device again. 9530 */ 9531 if (l2arc_trim_ahead > 0) { 9532 dev->l2ad_trim_all = B_TRUE; 9533 } else { 9534 memset(l2dhdr, 0, l2dhdr_asize); 9535 l2arc_dev_hdr_update(dev); 9536 } 9537 } 9538 } 9539 9540 /* 9541 * Add a vdev for use by the L2ARC. By this point the spa has already 9542 * validated the vdev and opened it. 9543 */ 9544 void 9545 l2arc_add_vdev(spa_t *spa, vdev_t *vd) 9546 { 9547 l2arc_dev_t *adddev; 9548 uint64_t l2dhdr_asize; 9549 9550 ASSERT(!l2arc_vdev_present(vd)); 9551 9552 /* 9553 * Create a new l2arc device entry. 
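 *
 * The usable range on the cache device ends up laid out as:
 *
 *	[0, VDEV_LABEL_START_SIZE)              vdev labels and boot region
 *	[VDEV_LABEL_START_SIZE, l2ad_start)     l2arc device header
 *	[l2ad_start, l2ad_end)                  rotary buffer for data/log blocks
 *
 * which matches the l2ad_start and l2ad_end assignments below.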
9554 */ 9555 adddev = vmem_zalloc(sizeof (l2arc_dev_t), KM_SLEEP); 9556 adddev->l2ad_spa = spa; 9557 adddev->l2ad_vdev = vd; 9558 /* leave extra size for an l2arc device header */ 9559 l2dhdr_asize = adddev->l2ad_dev_hdr_asize = 9560 MAX(sizeof (*adddev->l2ad_dev_hdr), 1 << vd->vdev_ashift); 9561 adddev->l2ad_start = VDEV_LABEL_START_SIZE + l2dhdr_asize; 9562 adddev->l2ad_end = VDEV_LABEL_START_SIZE + vdev_get_min_asize(vd); 9563 ASSERT3U(adddev->l2ad_start, <, adddev->l2ad_end); 9564 adddev->l2ad_hand = adddev->l2ad_start; 9565 adddev->l2ad_evict = adddev->l2ad_start; 9566 adddev->l2ad_first = B_TRUE; 9567 adddev->l2ad_writing = B_FALSE; 9568 adddev->l2ad_trim_all = B_FALSE; 9569 list_link_init(&adddev->l2ad_node); 9570 adddev->l2ad_dev_hdr = kmem_zalloc(l2dhdr_asize, KM_SLEEP); 9571 9572 mutex_init(&adddev->l2ad_mtx, NULL, MUTEX_DEFAULT, NULL); 9573 /* 9574 * This is a list of all ARC buffers that are still valid on the 9575 * device. 9576 */ 9577 list_create(&adddev->l2ad_buflist, sizeof (arc_buf_hdr_t), 9578 offsetof(arc_buf_hdr_t, b_l2hdr.b_l2node)); 9579 9580 /* 9581 * This is a list of pointers to log blocks that are still present 9582 * on the device. 9583 */ 9584 list_create(&adddev->l2ad_lbptr_list, sizeof (l2arc_lb_ptr_buf_t), 9585 offsetof(l2arc_lb_ptr_buf_t, node)); 9586 9587 vdev_space_update(vd, 0, 0, adddev->l2ad_end - adddev->l2ad_hand); 9588 zfs_refcount_create(&adddev->l2ad_alloc); 9589 zfs_refcount_create(&adddev->l2ad_lb_asize); 9590 zfs_refcount_create(&adddev->l2ad_lb_count); 9591 9592 /* 9593 * Decide if dev is eligible for L2ARC rebuild or whole device 9594 * trimming. This has to happen before the device is added in the 9595 * cache device list and l2arc_dev_mtx is released. Otherwise 9596 * l2arc_feed_thread() might already start writing on the 9597 * device. 9598 */ 9599 l2arc_rebuild_dev(adddev, B_FALSE); 9600 9601 /* 9602 * Add device to global list 9603 */ 9604 mutex_enter(&l2arc_dev_mtx); 9605 list_insert_head(l2arc_dev_list, adddev); 9606 atomic_inc_64(&l2arc_ndev); 9607 mutex_exit(&l2arc_dev_mtx); 9608 } 9609 9610 /* 9611 * Decide if a vdev is eligible for L2ARC rebuild, called from vdev_reopen() 9612 * in case of onlining a cache device. 9613 */ 9614 void 9615 l2arc_rebuild_vdev(vdev_t *vd, boolean_t reopen) 9616 { 9617 l2arc_dev_t *dev = NULL; 9618 9619 dev = l2arc_vdev_get(vd); 9620 ASSERT3P(dev, !=, NULL); 9621 9622 /* 9623 * In contrast to l2arc_add_vdev() we do not have to worry about 9624 * l2arc_feed_thread() invalidating previous content when onlining a 9625 * cache device. The device parameters (l2ad*) are not cleared when 9626 * offlining the device and writing new buffers will not invalidate 9627 * all previous content. In worst case only buffers that have not had 9628 * their log block written to the device will be lost. 9629 * When onlining the cache device (ie offline->online without exporting 9630 * the pool in between) this happens: 9631 * vdev_reopen() -> vdev_open() -> l2arc_rebuild_vdev() 9632 * | | 9633 * vdev_is_dead() = B_FALSE l2ad_rebuild = B_TRUE 9634 * During the time where vdev_is_dead = B_FALSE and until l2ad_rebuild 9635 * is set to B_TRUE we might write additional buffers to the device. 9636 */ 9637 l2arc_rebuild_dev(dev, reopen); 9638 } 9639 9640 /* 9641 * Remove a vdev from the L2ARC. 
9642 */ 9643 void 9644 l2arc_remove_vdev(vdev_t *vd) 9645 { 9646 l2arc_dev_t *remdev = NULL; 9647 9648 /* 9649 * Find the device by vdev 9650 */ 9651 remdev = l2arc_vdev_get(vd); 9652 ASSERT3P(remdev, !=, NULL); 9653 9654 /* 9655 * Cancel any ongoing or scheduled rebuild. 9656 */ 9657 mutex_enter(&l2arc_rebuild_thr_lock); 9658 if (remdev->l2ad_rebuild_began == B_TRUE) { 9659 remdev->l2ad_rebuild_cancel = B_TRUE; 9660 while (remdev->l2ad_rebuild == B_TRUE) 9661 cv_wait(&l2arc_rebuild_thr_cv, &l2arc_rebuild_thr_lock); 9662 } 9663 mutex_exit(&l2arc_rebuild_thr_lock); 9664 9665 /* 9666 * Remove device from global list 9667 */ 9668 mutex_enter(&l2arc_dev_mtx); 9669 list_remove(l2arc_dev_list, remdev); 9670 l2arc_dev_last = NULL; /* may have been invalidated */ 9671 atomic_dec_64(&l2arc_ndev); 9672 mutex_exit(&l2arc_dev_mtx); 9673 9674 /* 9675 * Clear all buflists and ARC references. L2ARC device flush. 9676 */ 9677 l2arc_evict(remdev, 0, B_TRUE); 9678 list_destroy(&remdev->l2ad_buflist); 9679 ASSERT(list_is_empty(&remdev->l2ad_lbptr_list)); 9680 list_destroy(&remdev->l2ad_lbptr_list); 9681 mutex_destroy(&remdev->l2ad_mtx); 9682 zfs_refcount_destroy(&remdev->l2ad_alloc); 9683 zfs_refcount_destroy(&remdev->l2ad_lb_asize); 9684 zfs_refcount_destroy(&remdev->l2ad_lb_count); 9685 kmem_free(remdev->l2ad_dev_hdr, remdev->l2ad_dev_hdr_asize); 9686 vmem_free(remdev, sizeof (l2arc_dev_t)); 9687 } 9688 9689 void 9690 l2arc_init(void) 9691 { 9692 l2arc_thread_exit = 0; 9693 l2arc_ndev = 0; 9694 9695 mutex_init(&l2arc_feed_thr_lock, NULL, MUTEX_DEFAULT, NULL); 9696 cv_init(&l2arc_feed_thr_cv, NULL, CV_DEFAULT, NULL); 9697 mutex_init(&l2arc_rebuild_thr_lock, NULL, MUTEX_DEFAULT, NULL); 9698 cv_init(&l2arc_rebuild_thr_cv, NULL, CV_DEFAULT, NULL); 9699 mutex_init(&l2arc_dev_mtx, NULL, MUTEX_DEFAULT, NULL); 9700 mutex_init(&l2arc_free_on_write_mtx, NULL, MUTEX_DEFAULT, NULL); 9701 9702 l2arc_dev_list = &L2ARC_dev_list; 9703 l2arc_free_on_write = &L2ARC_free_on_write; 9704 list_create(l2arc_dev_list, sizeof (l2arc_dev_t), 9705 offsetof(l2arc_dev_t, l2ad_node)); 9706 list_create(l2arc_free_on_write, sizeof (l2arc_data_free_t), 9707 offsetof(l2arc_data_free_t, l2df_list_node)); 9708 } 9709 9710 void 9711 l2arc_fini(void) 9712 { 9713 mutex_destroy(&l2arc_feed_thr_lock); 9714 cv_destroy(&l2arc_feed_thr_cv); 9715 mutex_destroy(&l2arc_rebuild_thr_lock); 9716 cv_destroy(&l2arc_rebuild_thr_cv); 9717 mutex_destroy(&l2arc_dev_mtx); 9718 mutex_destroy(&l2arc_free_on_write_mtx); 9719 9720 list_destroy(l2arc_dev_list); 9721 list_destroy(l2arc_free_on_write); 9722 } 9723 9724 void 9725 l2arc_start(void) 9726 { 9727 if (!(spa_mode_global & SPA_MODE_WRITE)) 9728 return; 9729 9730 (void) thread_create(NULL, 0, l2arc_feed_thread, NULL, 0, &p0, 9731 TS_RUN, defclsyspri); 9732 } 9733 9734 void 9735 l2arc_stop(void) 9736 { 9737 if (!(spa_mode_global & SPA_MODE_WRITE)) 9738 return; 9739 9740 mutex_enter(&l2arc_feed_thr_lock); 9741 cv_signal(&l2arc_feed_thr_cv); /* kick thread out of startup */ 9742 l2arc_thread_exit = 1; 9743 while (l2arc_thread_exit != 0) 9744 cv_wait(&l2arc_feed_thr_cv, &l2arc_feed_thr_lock); 9745 mutex_exit(&l2arc_feed_thr_lock); 9746 } 9747 9748 /* 9749 * Punches out rebuild threads for the L2ARC devices in a spa. This should 9750 * be called after pool import from the spa async thread, since starting 9751 * these threads directly from spa_import() will make them part of the 9752 * "zpool import" context and delay process exit (and thus pool import). 
9753 */ 9754 void 9755 l2arc_spa_rebuild_start(spa_t *spa) 9756 { 9757 ASSERT(MUTEX_HELD(&spa_namespace_lock)); 9758 9759 /* 9760 * Locate the spa's l2arc devices and kick off rebuild threads. 9761 */ 9762 for (int i = 0; i < spa->spa_l2cache.sav_count; i++) { 9763 l2arc_dev_t *dev = 9764 l2arc_vdev_get(spa->spa_l2cache.sav_vdevs[i]); 9765 if (dev == NULL) { 9766 /* Don't attempt a rebuild if the vdev is UNAVAIL */ 9767 continue; 9768 } 9769 mutex_enter(&l2arc_rebuild_thr_lock); 9770 if (dev->l2ad_rebuild && !dev->l2ad_rebuild_cancel) { 9771 dev->l2ad_rebuild_began = B_TRUE; 9772 (void) thread_create(NULL, 0, l2arc_dev_rebuild_thread, 9773 dev, 0, &p0, TS_RUN, minclsyspri); 9774 } 9775 mutex_exit(&l2arc_rebuild_thr_lock); 9776 } 9777 } 9778 9779 /* 9780 * Main entry point for L2ARC rebuilding. 9781 */ 9782 static __attribute__((noreturn)) void 9783 l2arc_dev_rebuild_thread(void *arg) 9784 { 9785 l2arc_dev_t *dev = arg; 9786 9787 VERIFY(!dev->l2ad_rebuild_cancel); 9788 VERIFY(dev->l2ad_rebuild); 9789 (void) l2arc_rebuild(dev); 9790 mutex_enter(&l2arc_rebuild_thr_lock); 9791 dev->l2ad_rebuild_began = B_FALSE; 9792 dev->l2ad_rebuild = B_FALSE; 9793 mutex_exit(&l2arc_rebuild_thr_lock); 9794 9795 thread_exit(); 9796 } 9797 9798 /* 9799 * This function implements the actual L2ARC metadata rebuild. It: 9800 * starts reading the log block chain and restores each block's contents 9801 * to memory (reconstructing arc_buf_hdr_t's). 9802 * 9803 * Operation stops under any of the following conditions: 9804 * 9805 * 1) We reach the end of the log block chain. 9806 * 2) We encounter *any* error condition (cksum errors, io errors) 9807 */ 9808 static int 9809 l2arc_rebuild(l2arc_dev_t *dev) 9810 { 9811 vdev_t *vd = dev->l2ad_vdev; 9812 spa_t *spa = vd->vdev_spa; 9813 int err = 0; 9814 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 9815 l2arc_log_blk_phys_t *this_lb, *next_lb; 9816 zio_t *this_io = NULL, *next_io = NULL; 9817 l2arc_log_blkptr_t lbps[2]; 9818 l2arc_lb_ptr_buf_t *lb_ptr_buf; 9819 boolean_t lock_held; 9820 9821 this_lb = vmem_zalloc(sizeof (*this_lb), KM_SLEEP); 9822 next_lb = vmem_zalloc(sizeof (*next_lb), KM_SLEEP); 9823 9824 /* 9825 * We prevent device removal while issuing reads to the device, 9826 * then during the rebuilding phases we drop this lock again so 9827 * that a spa_unload or device remove can be initiated - this is 9828 * safe, because the spa will signal us to stop before removing 9829 * our device and wait for us to stop. 9830 */ 9831 spa_config_enter(spa, SCL_L2ARC, vd, RW_READER); 9832 lock_held = B_TRUE; 9833 9834 /* 9835 * Retrieve the persistent L2ARC device state. 9836 * L2BLK_GET_PSIZE returns aligned size for log blocks. 9837 */ 9838 dev->l2ad_evict = MAX(l2dhdr->dh_evict, dev->l2ad_start); 9839 dev->l2ad_hand = MAX(l2dhdr->dh_start_lbps[0].lbp_daddr + 9840 L2BLK_GET_PSIZE((&l2dhdr->dh_start_lbps[0])->lbp_prop), 9841 dev->l2ad_start); 9842 dev->l2ad_first = !!(l2dhdr->dh_flags & L2ARC_DEV_HDR_EVICT_FIRST); 9843 9844 vd->vdev_trim_action_time = l2dhdr->dh_trim_action_time; 9845 vd->vdev_trim_state = l2dhdr->dh_trim_state; 9846 9847 /* 9848 * In case the zfs module parameter l2arc_rebuild_enabled is false 9849 * we do not start the rebuild process. 
9850 */ 9851 if (!l2arc_rebuild_enabled) 9852 goto out; 9853 9854 /* Prepare the rebuild process */ 9855 memcpy(lbps, l2dhdr->dh_start_lbps, sizeof (lbps)); 9856 9857 /* Start the rebuild process */ 9858 for (;;) { 9859 if (!l2arc_log_blkptr_valid(dev, &lbps[0])) 9860 break; 9861 9862 if ((err = l2arc_log_blk_read(dev, &lbps[0], &lbps[1], 9863 this_lb, next_lb, this_io, &next_io)) != 0) 9864 goto out; 9865 9866 /* 9867 * Our memory pressure valve. If the system is running low 9868 * on memory, rather than swamping memory with new ARC buf 9869 * hdrs, we opt not to rebuild the L2ARC. At this point, 9870 * however, we have already set up our L2ARC dev to chain in 9871 * new metadata log blocks, so the user may choose to offline/ 9872 * online the L2ARC dev at a later time (or re-import the pool) 9873 * to reconstruct it (when there's less memory pressure). 9874 */ 9875 if (l2arc_hdr_limit_reached()) { 9876 ARCSTAT_BUMP(arcstat_l2_rebuild_abort_lowmem); 9877 cmn_err(CE_NOTE, "System running low on memory, " 9878 "aborting L2ARC rebuild."); 9879 err = SET_ERROR(ENOMEM); 9880 goto out; 9881 } 9882 9883 spa_config_exit(spa, SCL_L2ARC, vd); 9884 lock_held = B_FALSE; 9885 9886 /* 9887 * Now that we know that the next_lb checks out alright, we 9888 * can start reconstruction from this log block. 9889 * L2BLK_GET_PSIZE returns aligned size for log blocks. 9890 */ 9891 uint64_t asize = L2BLK_GET_PSIZE((&lbps[0])->lbp_prop); 9892 l2arc_log_blk_restore(dev, this_lb, asize); 9893 9894 /* 9895 * log block restored, include its pointer in the list of 9896 * pointers to log blocks present in the L2ARC device. 9897 */ 9898 lb_ptr_buf = kmem_zalloc(sizeof (l2arc_lb_ptr_buf_t), KM_SLEEP); 9899 lb_ptr_buf->lb_ptr = kmem_zalloc(sizeof (l2arc_log_blkptr_t), 9900 KM_SLEEP); 9901 memcpy(lb_ptr_buf->lb_ptr, &lbps[0], 9902 sizeof (l2arc_log_blkptr_t)); 9903 mutex_enter(&dev->l2ad_mtx); 9904 list_insert_tail(&dev->l2ad_lbptr_list, lb_ptr_buf); 9905 ARCSTAT_INCR(arcstat_l2_log_blk_asize, asize); 9906 ARCSTAT_BUMP(arcstat_l2_log_blk_count); 9907 zfs_refcount_add_many(&dev->l2ad_lb_asize, asize, lb_ptr_buf); 9908 zfs_refcount_add(&dev->l2ad_lb_count, lb_ptr_buf); 9909 mutex_exit(&dev->l2ad_mtx); 9910 vdev_space_update(vd, asize, 0, 0); 9911 9912 /* 9913 * Protection against loops of log blocks: 9914 * 9915 * l2ad_hand l2ad_evict 9916 * V V 9917 * l2ad_start |=======================================| l2ad_end 9918 * -----|||----|||---|||----||| 9919 * (3) (2) (1) (0) 9920 * ---|||---|||----|||---||| 9921 * (7) (6) (5) (4) 9922 * 9923 * In this situation the pointer of log block (4) passes 9924 * l2arc_log_blkptr_valid() but the log block should not be 9925 * restored as it is overwritten by the payload of log block 9926 * (0). Only log blocks (0)-(3) should be restored. We check 9927 * whether l2ad_evict lies in between the payload starting 9928 * offset of the next log block (lbps[1].lbp_payload_start) 9929 * and the payload starting offset of the present log block 9930 * (lbps[0].lbp_payload_start). If true and this isn't the 9931 * first pass, we are looping from the beginning and we should 9932 * stop. 
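 *
 * As a concrete example (offsets are made up, and assuming
 * l2arc_range_check_overlap(bottom, top, check) tests whether check
 * falls within the possibly wrapped-around range [bottom, top]): if the
 * block just restored has lbp_payload_start = 900, the next (older)
 * block in the chain has lbp_payload_start = 100 and l2ad_evict = 500,
 * the check is true and we stop here, since the older block's payload
 * lies behind the evict hand and may already have been overwritten by
 * newer writes.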
9933 */ 9934 if (l2arc_range_check_overlap(lbps[1].lbp_payload_start, 9935 lbps[0].lbp_payload_start, dev->l2ad_evict) && 9936 !dev->l2ad_first) 9937 goto out; 9938 9939 kpreempt(KPREEMPT_SYNC); 9940 for (;;) { 9941 mutex_enter(&l2arc_rebuild_thr_lock); 9942 if (dev->l2ad_rebuild_cancel) { 9943 dev->l2ad_rebuild = B_FALSE; 9944 cv_signal(&l2arc_rebuild_thr_cv); 9945 mutex_exit(&l2arc_rebuild_thr_lock); 9946 err = SET_ERROR(ECANCELED); 9947 goto out; 9948 } 9949 mutex_exit(&l2arc_rebuild_thr_lock); 9950 if (spa_config_tryenter(spa, SCL_L2ARC, vd, 9951 RW_READER)) { 9952 lock_held = B_TRUE; 9953 break; 9954 } 9955 /* 9956 * L2ARC config lock held by somebody in writer, 9957 * possibly due to them trying to remove us. They'll 9958 * likely want us to shut down, so after a little 9959 * delay, we check l2ad_rebuild_cancel and retry 9960 * the lock again. 9961 */ 9962 delay(1); 9963 } 9964 9965 /* 9966 * Continue with the next log block. 9967 */ 9968 lbps[0] = lbps[1]; 9969 lbps[1] = this_lb->lb_prev_lbp; 9970 PTR_SWAP(this_lb, next_lb); 9971 this_io = next_io; 9972 next_io = NULL; 9973 } 9974 9975 if (this_io != NULL) 9976 l2arc_log_blk_fetch_abort(this_io); 9977 out: 9978 if (next_io != NULL) 9979 l2arc_log_blk_fetch_abort(next_io); 9980 vmem_free(this_lb, sizeof (*this_lb)); 9981 vmem_free(next_lb, sizeof (*next_lb)); 9982 9983 if (!l2arc_rebuild_enabled) { 9984 spa_history_log_internal(spa, "L2ARC rebuild", NULL, 9985 "disabled"); 9986 } else if (err == 0 && zfs_refcount_count(&dev->l2ad_lb_count) > 0) { 9987 ARCSTAT_BUMP(arcstat_l2_rebuild_success); 9988 spa_history_log_internal(spa, "L2ARC rebuild", NULL, 9989 "successful, restored %llu blocks", 9990 (u_longlong_t)zfs_refcount_count(&dev->l2ad_lb_count)); 9991 } else if (err == 0 && zfs_refcount_count(&dev->l2ad_lb_count) == 0) { 9992 /* 9993 * No error but also nothing restored, meaning the lbps array 9994 * in the device header points to invalid/non-present log 9995 * blocks. Reset the header. 9996 */ 9997 spa_history_log_internal(spa, "L2ARC rebuild", NULL, 9998 "no valid log blocks"); 9999 memset(l2dhdr, 0, dev->l2ad_dev_hdr_asize); 10000 l2arc_dev_hdr_update(dev); 10001 } else if (err == ECANCELED) { 10002 /* 10003 * In case the rebuild was canceled do not log to spa history 10004 * log as the pool may be in the process of being removed. 10005 */ 10006 zfs_dbgmsg("L2ARC rebuild aborted, restored %llu blocks", 10007 (u_longlong_t)zfs_refcount_count(&dev->l2ad_lb_count)); 10008 } else if (err != 0) { 10009 spa_history_log_internal(spa, "L2ARC rebuild", NULL, 10010 "aborted, restored %llu blocks", 10011 (u_longlong_t)zfs_refcount_count(&dev->l2ad_lb_count)); 10012 } 10013 10014 if (lock_held) 10015 spa_config_exit(spa, SCL_L2ARC, vd); 10016 10017 return (err); 10018 } 10019 10020 /* 10021 * Attempts to read the device header on the provided L2ARC device and writes 10022 * it to `hdr'. On success, this function returns 0, otherwise the appropriate 10023 * error code is returned.
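 *
 * A caller is expected to treat any failure here as "no usable
 * persistent L2ARC state" and skip the rebuild rather than trust a
 * header that failed to read or validate. A minimal, purely
 * hypothetical sketch of that calling pattern (not a function in this
 * file):
 *
 *	if (l2arc_dev_hdr_read(dev) == 0) {
 *		dev->l2ad_rebuild = B_TRUE;
 *	} else {
 *		// header missing or invalid: start from a clean slate
 *		memset(dev->l2ad_dev_hdr, 0, dev->l2ad_dev_hdr_asize);
 *	}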
10024 */ 10025 static int 10026 l2arc_dev_hdr_read(l2arc_dev_t *dev) 10027 { 10028 int err; 10029 uint64_t guid; 10030 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 10031 const uint64_t l2dhdr_asize = dev->l2ad_dev_hdr_asize; 10032 abd_t *abd; 10033 10034 guid = spa_guid(dev->l2ad_vdev->vdev_spa); 10035 10036 abd = abd_get_from_buf(l2dhdr, l2dhdr_asize); 10037 10038 err = zio_wait(zio_read_phys(NULL, dev->l2ad_vdev, 10039 VDEV_LABEL_START_SIZE, l2dhdr_asize, abd, 10040 ZIO_CHECKSUM_LABEL, NULL, NULL, ZIO_PRIORITY_SYNC_READ, 10041 ZIO_FLAG_CANFAIL | ZIO_FLAG_DONT_PROPAGATE | ZIO_FLAG_DONT_RETRY | 10042 ZIO_FLAG_SPECULATIVE, B_FALSE)); 10043 10044 abd_free(abd); 10045 10046 if (err != 0) { 10047 ARCSTAT_BUMP(arcstat_l2_rebuild_abort_dh_errors); 10048 zfs_dbgmsg("L2ARC IO error (%d) while reading device header, " 10049 "vdev guid: %llu", err, 10050 (u_longlong_t)dev->l2ad_vdev->vdev_guid); 10051 return (err); 10052 } 10053 10054 if (l2dhdr->dh_magic == BSWAP_64(L2ARC_DEV_HDR_MAGIC)) 10055 byteswap_uint64_array(l2dhdr, sizeof (*l2dhdr)); 10056 10057 if (l2dhdr->dh_magic != L2ARC_DEV_HDR_MAGIC || 10058 l2dhdr->dh_spa_guid != guid || 10059 l2dhdr->dh_vdev_guid != dev->l2ad_vdev->vdev_guid || 10060 l2dhdr->dh_version != L2ARC_PERSISTENT_VERSION || 10061 l2dhdr->dh_log_entries != dev->l2ad_log_entries || 10062 l2dhdr->dh_end != dev->l2ad_end || 10063 !l2arc_range_check_overlap(dev->l2ad_start, dev->l2ad_end, 10064 l2dhdr->dh_evict) || 10065 (l2dhdr->dh_trim_state != VDEV_TRIM_COMPLETE && 10066 l2arc_trim_ahead > 0)) { 10067 /* 10068 * Attempt to rebuild a device containing no actual dev hdr 10069 * or containing a header from some other pool or from another 10070 * version of persistent L2ARC. 10071 */ 10072 ARCSTAT_BUMP(arcstat_l2_rebuild_abort_unsupported); 10073 return (SET_ERROR(ENOTSUP)); 10074 } 10075 10076 return (0); 10077 } 10078 10079 /* 10080 * Reads L2ARC log blocks from storage and validates their contents. 10081 * 10082 * This function implements a simple fetcher to make sure that while 10083 * we're processing one buffer the L2ARC is already fetching the next 10084 * one in the chain. 10085 * 10086 * The arguments this_lp and next_lp point to the current and next log block 10087 * address in the block chain. Similarly, this_lb and next_lb hold the 10088 * l2arc_log_blk_phys_t's of the current and next L2ARC blk. 10089 * 10090 * The `this_io' and `next_io' arguments are used for block fetching. 10091 * When issuing the first blk IO during rebuild, you should pass NULL for 10092 * `this_io'. This function will then issue a sync IO to read the block and 10093 * also issue an async IO to fetch the next block in the block chain. The 10094 * fetched IO is returned in `next_io'. On subsequent calls to this 10095 * function, pass the value returned in `next_io' from the previous call 10096 * as `this_io' and a fresh `next_io' pointer to hold the next fetch IO. 10097 * Prior to the call, you should initialize your `next_io' pointer to be 10098 * NULL. If no fetch IO was issued, the pointer is left set at NULL. 10099 * 10100 * On success, this function returns 0, otherwise it returns an appropriate 10101 * error code. On error the fetching IO is aborted and cleared before 10102 * returning from this function. Therefore, if we return `success', the 10103 * caller can assume that we have taken care of cleanup of fetch IOs. 
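 *
 * As a minimal sketch (mirroring the rebuild loop in l2arc_rebuild()
 * above, with the restore and cancellation steps elided), the intended
 * calling pattern looks like this:
 *
 *	zio_t *this_io = NULL, *next_io = NULL;
 *	for (;;) {
 *		if (!l2arc_log_blkptr_valid(dev, &lbps[0]))
 *			break;
 *		if (l2arc_log_blk_read(dev, &lbps[0], &lbps[1],
 *		    this_lb, next_lb, this_io, &next_io) != 0)
 *			goto out;
 *		... restore the entries of this_lb ...
 *		lbps[0] = lbps[1];
 *		lbps[1] = this_lb->lb_prev_lbp;
 *		PTR_SWAP(this_lb, next_lb);
 *		this_io = next_io;
 *		next_io = NULL;
 *	}
 *	if (this_io != NULL)
 *		l2arc_log_blk_fetch_abort(this_io);
 * out:
 *	if (next_io != NULL)
 *		l2arc_log_blk_fetch_abort(next_io);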
10104 */ 10105 static int 10106 l2arc_log_blk_read(l2arc_dev_t *dev, 10107 const l2arc_log_blkptr_t *this_lbp, const l2arc_log_blkptr_t *next_lbp, 10108 l2arc_log_blk_phys_t *this_lb, l2arc_log_blk_phys_t *next_lb, 10109 zio_t *this_io, zio_t **next_io) 10110 { 10111 int err = 0; 10112 zio_cksum_t cksum; 10113 abd_t *abd = NULL; 10114 uint64_t asize; 10115 10116 ASSERT(this_lbp != NULL && next_lbp != NULL); 10117 ASSERT(this_lb != NULL && next_lb != NULL); 10118 ASSERT(next_io != NULL && *next_io == NULL); 10119 ASSERT(l2arc_log_blkptr_valid(dev, this_lbp)); 10120 10121 /* 10122 * Check to see if we have issued the IO for this log block in a 10123 * previous run. If not, this is the first call, so issue it now. 10124 */ 10125 if (this_io == NULL) { 10126 this_io = l2arc_log_blk_fetch(dev->l2ad_vdev, this_lbp, 10127 this_lb); 10128 } 10129 10130 /* 10131 * Peek to see if we can start issuing the next IO immediately. 10132 */ 10133 if (l2arc_log_blkptr_valid(dev, next_lbp)) { 10134 /* 10135 * Start issuing IO for the next log block early - this 10136 * should help keep the L2ARC device busy while we 10137 * decompress and restore this log block. 10138 */ 10139 *next_io = l2arc_log_blk_fetch(dev->l2ad_vdev, next_lbp, 10140 next_lb); 10141 } 10142 10143 /* Wait for the IO to read this log block to complete */ 10144 if ((err = zio_wait(this_io)) != 0) { 10145 ARCSTAT_BUMP(arcstat_l2_rebuild_abort_io_errors); 10146 zfs_dbgmsg("L2ARC IO error (%d) while reading log block, " 10147 "offset: %llu, vdev guid: %llu", err, 10148 (u_longlong_t)this_lbp->lbp_daddr, 10149 (u_longlong_t)dev->l2ad_vdev->vdev_guid); 10150 goto cleanup; 10151 } 10152 10153 /* 10154 * Make sure the buffer checks out. 10155 * L2BLK_GET_PSIZE returns aligned size for log blocks. 10156 */ 10157 asize = L2BLK_GET_PSIZE((this_lbp)->lbp_prop); 10158 fletcher_4_native(this_lb, asize, NULL, &cksum); 10159 if (!ZIO_CHECKSUM_EQUAL(cksum, this_lbp->lbp_cksum)) { 10160 ARCSTAT_BUMP(arcstat_l2_rebuild_abort_cksum_lb_errors); 10161 zfs_dbgmsg("L2ARC log block cksum failed, offset: %llu, " 10162 "vdev guid: %llu, l2ad_hand: %llu, l2ad_evict: %llu", 10163 (u_longlong_t)this_lbp->lbp_daddr, 10164 (u_longlong_t)dev->l2ad_vdev->vdev_guid, 10165 (u_longlong_t)dev->l2ad_hand, 10166 (u_longlong_t)dev->l2ad_evict); 10167 err = SET_ERROR(ECKSUM); 10168 goto cleanup; 10169 } 10170 10171 /* Now we can take our time decoding this buffer */ 10172 switch (L2BLK_GET_COMPRESS((this_lbp)->lbp_prop)) { 10173 case ZIO_COMPRESS_OFF: 10174 break; 10175 case ZIO_COMPRESS_LZ4: 10176 abd = abd_alloc_for_io(asize, B_TRUE); 10177 abd_copy_from_buf_off(abd, this_lb, 0, asize); 10178 if ((err = zio_decompress_data( 10179 L2BLK_GET_COMPRESS((this_lbp)->lbp_prop), 10180 abd, this_lb, asize, sizeof (*this_lb), NULL)) != 0) { 10181 err = SET_ERROR(EINVAL); 10182 goto cleanup; 10183 } 10184 break; 10185 default: 10186 err = SET_ERROR(EINVAL); 10187 goto cleanup; 10188 } 10189 if (this_lb->lb_magic == BSWAP_64(L2ARC_LOG_BLK_MAGIC)) 10190 byteswap_uint64_array(this_lb, sizeof (*this_lb)); 10191 if (this_lb->lb_magic != L2ARC_LOG_BLK_MAGIC) { 10192 err = SET_ERROR(EINVAL); 10193 goto cleanup; 10194 } 10195 cleanup: 10196 /* Abort an in-flight fetch I/O in case of error */ 10197 if (err != 0 && *next_io != NULL) { 10198 l2arc_log_blk_fetch_abort(*next_io); 10199 *next_io = NULL; 10200 } 10201 if (abd != NULL) 10202 abd_free(abd); 10203 return (err); 10204 } 10205 10206 /* 10207 * Restores the payload of a log block to ARC. 
This creates empty ARC hdr 10208 * entries which only contain an l2arc hdr, essentially restoring the 10209 * buffers to their L2ARC evicted state. This function also updates space 10210 * usage on the L2ARC vdev to make sure it tracks restored buffers. 10211 */ 10212 static void 10213 l2arc_log_blk_restore(l2arc_dev_t *dev, const l2arc_log_blk_phys_t *lb, 10214 uint64_t lb_asize) 10215 { 10216 uint64_t size = 0, asize = 0; 10217 uint64_t log_entries = dev->l2ad_log_entries; 10218 10219 /* 10220 * Usually arc_adapt() is called only for data, not headers, but 10221 * since we may allocate significant amount of memory here, let ARC 10222 * grow its arc_c. 10223 */ 10224 arc_adapt(log_entries * HDR_L2ONLY_SIZE); 10225 10226 for (int i = log_entries - 1; i >= 0; i--) { 10227 /* 10228 * Restore goes in the reverse temporal direction to preserve 10229 * correct temporal ordering of buffers in the l2ad_buflist. 10230 * l2arc_hdr_restore also does a list_insert_tail instead of 10231 * list_insert_head on the l2ad_buflist: 10232 * 10233 * LIST l2ad_buflist LIST 10234 * HEAD <------ (time) ------ TAIL 10235 * direction +-----+-----+-----+-----+-----+ direction 10236 * of l2arc <== | buf | buf | buf | buf | buf | ===> of rebuild 10237 * fill +-----+-----+-----+-----+-----+ 10238 * ^ ^ 10239 * | | 10240 * | | 10241 * l2arc_feed_thread l2arc_rebuild 10242 * will place new bufs here restores bufs here 10243 * 10244 * During l2arc_rebuild() the device is not used by 10245 * l2arc_feed_thread() as dev->l2ad_rebuild is set to true. 10246 */ 10247 size += L2BLK_GET_LSIZE((&lb->lb_entries[i])->le_prop); 10248 asize += vdev_psize_to_asize(dev->l2ad_vdev, 10249 L2BLK_GET_PSIZE((&lb->lb_entries[i])->le_prop)); 10250 l2arc_hdr_restore(&lb->lb_entries[i], dev); 10251 } 10252 10253 /* 10254 * Record rebuild stats: 10255 * size Logical size of restored buffers in the L2ARC 10256 * asize Aligned size of restored buffers in the L2ARC 10257 */ 10258 ARCSTAT_INCR(arcstat_l2_rebuild_size, size); 10259 ARCSTAT_INCR(arcstat_l2_rebuild_asize, asize); 10260 ARCSTAT_INCR(arcstat_l2_rebuild_bufs, log_entries); 10261 ARCSTAT_F_AVG(arcstat_l2_log_blk_avg_asize, lb_asize); 10262 ARCSTAT_F_AVG(arcstat_l2_data_to_meta_ratio, asize / lb_asize); 10263 ARCSTAT_BUMP(arcstat_l2_rebuild_log_blks); 10264 } 10265 10266 /* 10267 * Restores a single ARC buf hdr from a log entry. The ARC buffer is put 10268 * into a state indicating that it has been evicted to L2ARC. 10269 */ 10270 static void 10271 l2arc_hdr_restore(const l2arc_log_ent_phys_t *le, l2arc_dev_t *dev) 10272 { 10273 arc_buf_hdr_t *hdr, *exists; 10274 kmutex_t *hash_lock; 10275 arc_buf_contents_t type = L2BLK_GET_TYPE((le)->le_prop); 10276 uint64_t asize; 10277 10278 /* 10279 * Do all the allocation before grabbing any locks, this lets us 10280 * sleep if memory is full and we don't have to deal with failed 10281 * allocations. 10282 */ 10283 hdr = arc_buf_alloc_l2only(L2BLK_GET_LSIZE((le)->le_prop), type, 10284 dev, le->le_dva, le->le_daddr, 10285 L2BLK_GET_PSIZE((le)->le_prop), le->le_birth, 10286 L2BLK_GET_COMPRESS((le)->le_prop), le->le_complevel, 10287 L2BLK_GET_PROTECTED((le)->le_prop), 10288 L2BLK_GET_PREFETCH((le)->le_prop), 10289 L2BLK_GET_STATE((le)->le_prop)); 10290 asize = vdev_psize_to_asize(dev->l2ad_vdev, 10291 L2BLK_GET_PSIZE((le)->le_prop)); 10292 10293 /* 10294 * vdev_space_update() has to be called before arc_hdr_destroy() to 10295 * avoid underflow since the latter also calls vdev_space_update(). 
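 * (arc_hdr_destroy() below, when it tears down a header that turns out
 * to be already cached, drops the header's L2ARC state and subtracts
 * the same asize from the vdev space accounting; adding the space
 * first keeps that counter from going negative.)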
10296 */ 10297 l2arc_hdr_arcstats_increment(hdr); 10298 vdev_space_update(dev->l2ad_vdev, asize, 0, 0); 10299 10300 mutex_enter(&dev->l2ad_mtx); 10301 list_insert_tail(&dev->l2ad_buflist, hdr); 10302 (void) zfs_refcount_add_many(&dev->l2ad_alloc, arc_hdr_size(hdr), hdr); 10303 mutex_exit(&dev->l2ad_mtx); 10304 10305 exists = buf_hash_insert(hdr, &hash_lock); 10306 if (exists) { 10307 /* Buffer was already cached, no need to restore it. */ 10308 arc_hdr_destroy(hdr); 10309 /* 10310 * If the buffer is already cached, check whether it has 10311 * L2ARC metadata. If not, enter them and update the flag. 10312 * This is important in case of onlining a cache device, since 10313 * we previously evicted all L2ARC metadata from ARC. 10314 */ 10315 if (!HDR_HAS_L2HDR(exists)) { 10316 arc_hdr_set_flags(exists, ARC_FLAG_HAS_L2HDR); 10317 exists->b_l2hdr.b_dev = dev; 10318 exists->b_l2hdr.b_daddr = le->le_daddr; 10319 exists->b_l2hdr.b_arcs_state = 10320 L2BLK_GET_STATE((le)->le_prop); 10321 mutex_enter(&dev->l2ad_mtx); 10322 list_insert_tail(&dev->l2ad_buflist, exists); 10323 (void) zfs_refcount_add_many(&dev->l2ad_alloc, 10324 arc_hdr_size(exists), exists); 10325 mutex_exit(&dev->l2ad_mtx); 10326 l2arc_hdr_arcstats_increment(exists); 10327 vdev_space_update(dev->l2ad_vdev, asize, 0, 0); 10328 } 10329 ARCSTAT_BUMP(arcstat_l2_rebuild_bufs_precached); 10330 } 10331 10332 mutex_exit(hash_lock); 10333 } 10334 10335 /* 10336 * Starts an asynchronous read IO to read a log block. This is used in log 10337 * block reconstruction to start reading the next block before we are done 10338 * decoding and reconstructing the current block, to keep the l2arc device 10339 * nice and hot with read IO to process. 10340 * The returned zio will contain newly allocated memory buffers for the IO 10341 * data which should then be freed by the caller once the zio is no longer 10342 * needed (i.e. due to it having completed). If you wish to abort this 10343 * zio, you should do so using l2arc_log_blk_fetch_abort, which takes 10344 * care of disposing of the allocated buffers correctly. 10345 */ 10346 static zio_t * 10347 l2arc_log_blk_fetch(vdev_t *vd, const l2arc_log_blkptr_t *lbp, 10348 l2arc_log_blk_phys_t *lb) 10349 { 10350 uint32_t asize; 10351 zio_t *pio; 10352 l2arc_read_callback_t *cb; 10353 10354 /* L2BLK_GET_PSIZE returns aligned size for log blocks */ 10355 asize = L2BLK_GET_PSIZE((lbp)->lbp_prop); 10356 ASSERT(asize <= sizeof (l2arc_log_blk_phys_t)); 10357 10358 cb = kmem_zalloc(sizeof (l2arc_read_callback_t), KM_SLEEP); 10359 cb->l2rcb_abd = abd_get_from_buf(lb, asize); 10360 pio = zio_root(vd->vdev_spa, l2arc_blk_fetch_done, cb, 10361 ZIO_FLAG_CANFAIL | ZIO_FLAG_DONT_PROPAGATE | ZIO_FLAG_DONT_RETRY); 10362 (void) zio_nowait(zio_read_phys(pio, vd, lbp->lbp_daddr, asize, 10363 cb->l2rcb_abd, ZIO_CHECKSUM_OFF, NULL, NULL, 10364 ZIO_PRIORITY_ASYNC_READ, ZIO_FLAG_CANFAIL | 10365 ZIO_FLAG_DONT_PROPAGATE | ZIO_FLAG_DONT_RETRY, B_FALSE)); 10366 10367 return (pio); 10368 } 10369 10370 /* 10371 * Aborts a zio returned from l2arc_log_blk_fetch and frees the data 10372 * buffers allocated for it. 10373 */ 10374 static void 10375 l2arc_log_blk_fetch_abort(zio_t *zio) 10376 { 10377 (void) zio_wait(zio); 10378 } 10379 10380 /* 10381 * Creates a zio to update the device header on an l2arc device.
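 * The header write is issued and waited on synchronously; a failure is
 * only logged via zfs_dbgmsg(), since the header is rewritten on the
 * next update and an unreadable or stale header simply causes
 * l2arc_dev_hdr_read() to fail validation on the next import.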
10382 */ 10383 void 10384 l2arc_dev_hdr_update(l2arc_dev_t *dev) 10385 { 10386 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 10387 const uint64_t l2dhdr_asize = dev->l2ad_dev_hdr_asize; 10388 abd_t *abd; 10389 int err; 10390 10391 VERIFY(spa_config_held(dev->l2ad_spa, SCL_STATE_ALL, RW_READER)); 10392 10393 l2dhdr->dh_magic = L2ARC_DEV_HDR_MAGIC; 10394 l2dhdr->dh_version = L2ARC_PERSISTENT_VERSION; 10395 l2dhdr->dh_spa_guid = spa_guid(dev->l2ad_vdev->vdev_spa); 10396 l2dhdr->dh_vdev_guid = dev->l2ad_vdev->vdev_guid; 10397 l2dhdr->dh_log_entries = dev->l2ad_log_entries; 10398 l2dhdr->dh_evict = dev->l2ad_evict; 10399 l2dhdr->dh_start = dev->l2ad_start; 10400 l2dhdr->dh_end = dev->l2ad_end; 10401 l2dhdr->dh_lb_asize = zfs_refcount_count(&dev->l2ad_lb_asize); 10402 l2dhdr->dh_lb_count = zfs_refcount_count(&dev->l2ad_lb_count); 10403 l2dhdr->dh_flags = 0; 10404 l2dhdr->dh_trim_action_time = dev->l2ad_vdev->vdev_trim_action_time; 10405 l2dhdr->dh_trim_state = dev->l2ad_vdev->vdev_trim_state; 10406 if (dev->l2ad_first) 10407 l2dhdr->dh_flags |= L2ARC_DEV_HDR_EVICT_FIRST; 10408 10409 abd = abd_get_from_buf(l2dhdr, l2dhdr_asize); 10410 10411 err = zio_wait(zio_write_phys(NULL, dev->l2ad_vdev, 10412 VDEV_LABEL_START_SIZE, l2dhdr_asize, abd, ZIO_CHECKSUM_LABEL, NULL, 10413 NULL, ZIO_PRIORITY_ASYNC_WRITE, ZIO_FLAG_CANFAIL, B_FALSE)); 10414 10415 abd_free(abd); 10416 10417 if (err != 0) { 10418 zfs_dbgmsg("L2ARC IO error (%d) while writing device header, " 10419 "vdev guid: %llu", err, 10420 (u_longlong_t)dev->l2ad_vdev->vdev_guid); 10421 } 10422 } 10423 10424 /* 10425 * Commits a log block to the L2ARC device. This routine is invoked from 10426 * l2arc_write_buffers when the log block fills up. 10427 * This function allocates some memory to temporarily hold the serialized 10428 * buffer to be written. This is then released in l2arc_write_done. 10429 */ 10430 static uint64_t 10431 l2arc_log_blk_commit(l2arc_dev_t *dev, zio_t *pio, l2arc_write_callback_t *cb) 10432 { 10433 l2arc_log_blk_phys_t *lb = &dev->l2ad_log_blk; 10434 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 10435 uint64_t psize, asize; 10436 zio_t *wzio; 10437 l2arc_lb_abd_buf_t *abd_buf; 10438 uint8_t *tmpbuf = NULL; 10439 l2arc_lb_ptr_buf_t *lb_ptr_buf; 10440 10441 VERIFY3S(dev->l2ad_log_ent_idx, ==, dev->l2ad_log_entries); 10442 10443 abd_buf = zio_buf_alloc(sizeof (*abd_buf)); 10444 abd_buf->abd = abd_get_from_buf(lb, sizeof (*lb)); 10445 lb_ptr_buf = kmem_zalloc(sizeof (l2arc_lb_ptr_buf_t), KM_SLEEP); 10446 lb_ptr_buf->lb_ptr = kmem_zalloc(sizeof (l2arc_log_blkptr_t), KM_SLEEP); 10447 10448 /* link the buffer into the block chain */ 10449 lb->lb_prev_lbp = l2dhdr->dh_start_lbps[1]; 10450 lb->lb_magic = L2ARC_LOG_BLK_MAGIC; 10451 10452 /* 10453 * l2arc_log_blk_commit() may be called multiple times during a single 10454 * l2arc_write_buffers() call. Save the allocated abd buffers in a list 10455 * so we can free them in l2arc_write_done() later on. 10456 */ 10457 list_insert_tail(&cb->l2wcb_abd_list, abd_buf); 10458 10459 /* try to compress the buffer */ 10460 psize = zio_compress_data(ZIO_COMPRESS_LZ4, 10461 abd_buf->abd, (void **) &tmpbuf, sizeof (*lb), 0); 10462 10463 /* a log block is never entirely zero */ 10464 ASSERT(psize != 0); 10465 asize = vdev_psize_to_asize(dev->l2ad_vdev, psize); 10466 ASSERT(asize <= sizeof (*lb)); 10467 10468 /* 10469 * Update the start log block pointer in the device header to point 10470 * to the log block we're about to write. 
10471 */ 10472 l2dhdr->dh_start_lbps[1] = l2dhdr->dh_start_lbps[0]; 10473 l2dhdr->dh_start_lbps[0].lbp_daddr = dev->l2ad_hand; 10474 l2dhdr->dh_start_lbps[0].lbp_payload_asize = 10475 dev->l2ad_log_blk_payload_asize; 10476 l2dhdr->dh_start_lbps[0].lbp_payload_start = 10477 dev->l2ad_log_blk_payload_start; 10478 L2BLK_SET_LSIZE( 10479 (&l2dhdr->dh_start_lbps[0])->lbp_prop, sizeof (*lb)); 10480 L2BLK_SET_PSIZE( 10481 (&l2dhdr->dh_start_lbps[0])->lbp_prop, asize); 10482 L2BLK_SET_CHECKSUM( 10483 (&l2dhdr->dh_start_lbps[0])->lbp_prop, 10484 ZIO_CHECKSUM_FLETCHER_4); 10485 if (asize < sizeof (*lb)) { 10486 /* compression succeeded */ 10487 memset(tmpbuf + psize, 0, asize - psize); 10488 L2BLK_SET_COMPRESS( 10489 (&l2dhdr->dh_start_lbps[0])->lbp_prop, 10490 ZIO_COMPRESS_LZ4); 10491 } else { 10492 /* compression failed */ 10493 memcpy(tmpbuf, lb, sizeof (*lb)); 10494 L2BLK_SET_COMPRESS( 10495 (&l2dhdr->dh_start_lbps[0])->lbp_prop, 10496 ZIO_COMPRESS_OFF); 10497 } 10498 10499 /* checksum what we're about to write */ 10500 fletcher_4_native(tmpbuf, asize, NULL, 10501 &l2dhdr->dh_start_lbps[0].lbp_cksum); 10502 10503 abd_free(abd_buf->abd); 10504 10505 /* perform the write itself */ 10506 abd_buf->abd = abd_get_from_buf(tmpbuf, sizeof (*lb)); 10507 abd_take_ownership_of_buf(abd_buf->abd, B_TRUE); 10508 wzio = zio_write_phys(pio, dev->l2ad_vdev, dev->l2ad_hand, 10509 asize, abd_buf->abd, ZIO_CHECKSUM_OFF, NULL, NULL, 10510 ZIO_PRIORITY_ASYNC_WRITE, ZIO_FLAG_CANFAIL, B_FALSE); 10511 DTRACE_PROBE2(l2arc__write, vdev_t *, dev->l2ad_vdev, zio_t *, wzio); 10512 (void) zio_nowait(wzio); 10513 10514 dev->l2ad_hand += asize; 10515 /* 10516 * Include the committed log block's pointer in the list of pointers 10517 * to log blocks present in the L2ARC device. 10518 */ 10519 memcpy(lb_ptr_buf->lb_ptr, &l2dhdr->dh_start_lbps[0], 10520 sizeof (l2arc_log_blkptr_t)); 10521 mutex_enter(&dev->l2ad_mtx); 10522 list_insert_head(&dev->l2ad_lbptr_list, lb_ptr_buf); 10523 ARCSTAT_INCR(arcstat_l2_log_blk_asize, asize); 10524 ARCSTAT_BUMP(arcstat_l2_log_blk_count); 10525 zfs_refcount_add_many(&dev->l2ad_lb_asize, asize, lb_ptr_buf); 10526 zfs_refcount_add(&dev->l2ad_lb_count, lb_ptr_buf); 10527 mutex_exit(&dev->l2ad_mtx); 10528 vdev_space_update(dev->l2ad_vdev, asize, 0, 0); 10529 10530 /* bump the kstats */ 10531 ARCSTAT_INCR(arcstat_l2_write_bytes, asize); 10532 ARCSTAT_BUMP(arcstat_l2_log_blk_writes); 10533 ARCSTAT_F_AVG(arcstat_l2_log_blk_avg_asize, asize); 10534 ARCSTAT_F_AVG(arcstat_l2_data_to_meta_ratio, 10535 dev->l2ad_log_blk_payload_asize / asize); 10536 10537 /* start a new log block */ 10538 dev->l2ad_log_ent_idx = 0; 10539 dev->l2ad_log_blk_payload_asize = 0; 10540 dev->l2ad_log_blk_payload_start = 0; 10541 10542 return (asize); 10543 } 10544 10545 /* 10546 * Validates an L2ARC log block address to make sure that it can be read 10547 * from the provided L2ARC device. 
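 * Both the rebuild loop in l2arc_rebuild() and l2arc_log_blk_read()
 * apply this check before issuing any I/O for a log block, so an
 * invalid or partially evicted pointer simply terminates the walk of
 * the log block chain.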
10548 */ 10549 boolean_t 10550 l2arc_log_blkptr_valid(l2arc_dev_t *dev, const l2arc_log_blkptr_t *lbp) 10551 { 10552 /* L2BLK_GET_PSIZE returns aligned size for log blocks */ 10553 uint64_t asize = L2BLK_GET_PSIZE((lbp)->lbp_prop); 10554 uint64_t end = lbp->lbp_daddr + asize - 1; 10555 uint64_t start = lbp->lbp_payload_start; 10556 boolean_t evicted = B_FALSE; 10557 10558 /* 10559 * A log block is valid if all of the following conditions are true: 10560 * - it fits entirely (including its payload) between l2ad_start and 10561 * l2ad_end 10562 * - it has a valid size 10563 * - neither the log block itself nor part of its payload was evicted 10564 * by l2arc_evict(): 10565 * 10566 * l2ad_hand l2ad_evict 10567 * | | lbp_daddr 10568 * | start | | end 10569 * | | | | | 10570 * V V V V V 10571 * l2ad_start ============================================ l2ad_end 10572 * --------------------------|||| 10573 * ^ ^ 10574 * | log block 10575 * payload 10576 */ 10577 10578 evicted = 10579 l2arc_range_check_overlap(start, end, dev->l2ad_hand) || 10580 l2arc_range_check_overlap(start, end, dev->l2ad_evict) || 10581 l2arc_range_check_overlap(dev->l2ad_hand, dev->l2ad_evict, start) || 10582 l2arc_range_check_overlap(dev->l2ad_hand, dev->l2ad_evict, end); 10583 10584 return (start >= dev->l2ad_start && end <= dev->l2ad_end && 10585 asize > 0 && asize <= sizeof (l2arc_log_blk_phys_t) && 10586 (!evicted || dev->l2ad_first)); 10587 } 10588 10589 /* 10590 * Inserts ARC buffer header `hdr' into the current L2ARC log block on 10591 * the device. The buffer being inserted must be present in L2ARC. 10592 * Returns B_TRUE if the L2ARC log block is full and needs to be committed 10593 * to L2ARC, or B_FALSE if it still has room for more ARC buffers. 10594 */ 10595 static boolean_t 10596 l2arc_log_blk_insert(l2arc_dev_t *dev, const arc_buf_hdr_t *hdr) 10597 { 10598 l2arc_log_blk_phys_t *lb = &dev->l2ad_log_blk; 10599 l2arc_log_ent_phys_t *le; 10600 10601 if (dev->l2ad_log_entries == 0) 10602 return (B_FALSE); 10603 10604 int index = dev->l2ad_log_ent_idx++; 10605 10606 ASSERT3S(index, <, dev->l2ad_log_entries); 10607 ASSERT(HDR_HAS_L2HDR(hdr)); 10608 10609 le = &lb->lb_entries[index]; 10610 memset(le, 0, sizeof (*le)); 10611 le->le_dva = hdr->b_dva; 10612 le->le_birth = hdr->b_birth; 10613 le->le_daddr = hdr->b_l2hdr.b_daddr; 10614 if (index == 0) 10615 dev->l2ad_log_blk_payload_start = le->le_daddr; 10616 L2BLK_SET_LSIZE((le)->le_prop, HDR_GET_LSIZE(hdr)); 10617 L2BLK_SET_PSIZE((le)->le_prop, HDR_GET_PSIZE(hdr)); 10618 L2BLK_SET_COMPRESS((le)->le_prop, HDR_GET_COMPRESS(hdr)); 10619 le->le_complevel = hdr->b_complevel; 10620 L2BLK_SET_TYPE((le)->le_prop, hdr->b_type); 10621 L2BLK_SET_PROTECTED((le)->le_prop, !!(HDR_PROTECTED(hdr))); 10622 L2BLK_SET_PREFETCH((le)->le_prop, !!(HDR_PREFETCH(hdr))); 10623 L2BLK_SET_STATE((le)->le_prop, hdr->b_l1hdr.b_state->arcs_state); 10624 10625 dev->l2ad_log_blk_payload_asize += vdev_psize_to_asize(dev->l2ad_vdev, 10626 HDR_GET_PSIZE(hdr)); 10627 10628 return (dev->l2ad_log_ent_idx == dev->l2ad_log_entries); 10629 } 10630 10631 /* 10632 * Checks whether a given L2ARC device address sits in a time-sequential 10633 * range. The trick here is that the L2ARC is a rotary buffer, so we can't 10634 * just do a range comparison, we need to handle the situation in which the 10635 * range wraps around the end of the L2ARC device. Arguments: 10636 * bottom -- Lower end of the range to check (written to earlier). 10637 * top -- Upper end of the range to check (written to later). 
10638 * check -- The address for which we want to determine if it sits in 10639 * between the top and bottom. 10640 * 10641 * The 3-way conditional below represents the following cases: 10642 * 10643 * bottom < top : Sequentially ordered case: 10644 * <check>--------+-------------------+ 10645 * | (overlap here?) | 10646 * L2ARC dev V V 10647 * |---------------<bottom>============<top>--------------| 10648 * 10649 * bottom > top: Looped-around case: 10650 * <check>--------+------------------+ 10651 * | (overlap here?) | 10652 * L2ARC dev V V 10653 * |===============<top>---------------<bottom>===========| 10654 * ^ ^ 10655 * | (or here?) | 10656 * +---------------+---------<check> 10657 * 10658 * top == bottom : Just a single address comparison. 10659 */ 10660 boolean_t 10661 l2arc_range_check_overlap(uint64_t bottom, uint64_t top, uint64_t check) 10662 { 10663 if (bottom < top) 10664 return (bottom <= check && check <= top); 10665 else if (bottom > top) 10666 return (check <= top || bottom <= check); 10667 else 10668 return (check == top); 10669 } 10670 10671 EXPORT_SYMBOL(arc_buf_size); 10672 EXPORT_SYMBOL(arc_write); 10673 EXPORT_SYMBOL(arc_read); 10674 EXPORT_SYMBOL(arc_buf_info); 10675 EXPORT_SYMBOL(arc_getbuf_func); 10676 EXPORT_SYMBOL(arc_add_prune_callback); 10677 EXPORT_SYMBOL(arc_remove_prune_callback); 10678 10679 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, min, param_set_arc_min, 10680 spl_param_get_u64, ZMOD_RW, "Minimum ARC size in bytes"); 10681 10682 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, max, param_set_arc_max, 10683 spl_param_get_u64, ZMOD_RW, "Maximum ARC size in bytes"); 10684 10685 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, meta_balance, UINT, ZMOD_RW, 10686 "Balance between metadata and data on ghost hits."); 10687 10688 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, grow_retry, param_set_arc_int, 10689 param_get_uint, ZMOD_RW, "Seconds before growing ARC size"); 10690 10691 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, shrink_shift, param_set_arc_int, 10692 param_get_uint, ZMOD_RW, "log2(fraction of ARC to reclaim)"); 10693 10694 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, pc_percent, UINT, ZMOD_RW, 10695 "Percent of pagecache to reclaim ARC to"); 10696 10697 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, average_blocksize, UINT, ZMOD_RD, 10698 "Target average block size"); 10699 10700 ZFS_MODULE_PARAM(zfs, zfs_, compressed_arc_enabled, INT, ZMOD_RW, 10701 "Disable compressed ARC buffers"); 10702 10703 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, min_prefetch_ms, param_set_arc_int, 10704 param_get_uint, ZMOD_RW, "Min life of prefetch block in ms"); 10705 10706 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, min_prescient_prefetch_ms, 10707 param_set_arc_int, param_get_uint, ZMOD_RW, 10708 "Min life of prescient prefetched block in ms"); 10709 10710 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, write_max, U64, ZMOD_RW, 10711 "Max write bytes per interval"); 10712 10713 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, write_boost, U64, ZMOD_RW, 10714 "Extra write bytes during device warmup"); 10715 10716 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, headroom, U64, ZMOD_RW, 10717 "Number of max device writes to precache"); 10718 10719 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, headroom_boost, U64, ZMOD_RW, 10720 "Compressed l2arc_headroom multiplier"); 10721 10722 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, trim_ahead, U64, ZMOD_RW, 10723 "TRIM ahead L2ARC write size multiplier"); 10724 10725 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, feed_secs, U64, ZMOD_RW, 10726 "Seconds between L2ARC writing"); 10727 10728 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, feed_min_ms, U64, ZMOD_RW, 
10729 "Min feed interval in milliseconds"); 10730 10731 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, noprefetch, INT, ZMOD_RW, 10732 "Skip caching prefetched buffers"); 10733 10734 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, feed_again, INT, ZMOD_RW, 10735 "Turbo L2ARC warmup"); 10736 10737 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, norw, INT, ZMOD_RW, 10738 "No reads during writes"); 10739 10740 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, meta_percent, UINT, ZMOD_RW, 10741 "Percent of ARC size allowed for L2ARC-only headers"); 10742 10743 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, rebuild_enabled, INT, ZMOD_RW, 10744 "Rebuild the L2ARC when importing a pool"); 10745 10746 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, rebuild_blocks_min_l2size, U64, ZMOD_RW, 10747 "Min size in bytes to write rebuild log blocks in L2ARC"); 10748 10749 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, mfuonly, INT, ZMOD_RW, 10750 "Cache only MFU data from ARC into L2ARC"); 10751 10752 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, exclude_special, INT, ZMOD_RW, 10753 "Exclude dbufs on special vdevs from being cached to L2ARC if set."); 10754 10755 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, lotsfree_percent, param_set_arc_int, 10756 param_get_uint, ZMOD_RW, "System free memory I/O throttle in bytes"); 10757 10758 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, sys_free, param_set_arc_u64, 10759 spl_param_get_u64, ZMOD_RW, "System free memory target size in bytes"); 10760 10761 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, dnode_limit, param_set_arc_u64, 10762 spl_param_get_u64, ZMOD_RW, "Minimum bytes of dnodes in ARC"); 10763 10764 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, dnode_limit_percent, 10765 param_set_arc_int, param_get_uint, ZMOD_RW, 10766 "Percent of ARC meta buffers for dnodes"); 10767 10768 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, dnode_reduce_percent, UINT, ZMOD_RW, 10769 "Percentage of excess dnodes to try to unpin"); 10770 10771 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, eviction_pct, UINT, ZMOD_RW, 10772 "When full, ARC allocation waits for eviction of this % of alloc size"); 10773 10774 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, evict_batch_limit, UINT, ZMOD_RW, 10775 "The number of headers to evict per sublist before moving to the next"); 10776 10777 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, prune_task_threads, INT, ZMOD_RW, 10778 "Number of arc_prune threads"); 10779