// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * Budget Fair Queueing (BFQ) I/O scheduler.
 *
 * Based on ideas and code from CFQ:
 * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
 *
 * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
 *                    Paolo Valente <paolo.valente@unimore.it>
 *
 * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
 *                    Arianna Avanzini <avanzini@google.com>
 *
 * Copyright (C) 2017 Paolo Valente <paolo.valente@linaro.org>
 *
 * BFQ is a proportional-share I/O scheduler, with some extra
 * low-latency capabilities. BFQ also supports full hierarchical
 * scheduling through cgroups. The next paragraphs provide an
 * introduction to BFQ's inner workings. Details on BFQ benefits,
 * usage and limitations can be found in
 * Documentation/block/bfq-iosched.rst.
 *
 * BFQ is a proportional-share storage-I/O scheduling algorithm based
 * on the slice-by-slice service scheme of CFQ. But BFQ assigns
 * budgets, measured in number of sectors, to processes instead of
 * time slices. The device is not granted to the in-service process
 * for a given time slice, but until it has exhausted its assigned
 * budget. This change from the time to the service domain enables BFQ
 * to distribute the device throughput among processes as desired,
 * without any distortion due to throughput fluctuations, or to device
 * internal queueing. BFQ uses an ad hoc internal scheduler, called
 * B-WF2Q+, to schedule processes according to their budgets. More
 * precisely, BFQ schedules queues associated with processes. Each
 * process/queue is assigned a user-configurable weight, and B-WF2Q+
 * guarantees that each queue receives a fraction of the throughput
 * proportional to its weight. Thanks to the accurate policy of
 * B-WF2Q+, BFQ can afford to assign high budgets to I/O-bound
 * processes issuing sequential requests (to boost the throughput),
 * and yet guarantee a low latency to interactive and soft real-time
 * applications.
 *
 * In particular, to provide these low-latency guarantees, BFQ
 * explicitly privileges the I/O of two classes of time-sensitive
 * applications: interactive and soft real-time. In more detail, BFQ
 * behaves this way if the low_latency parameter is set (default
 * configuration). This feature enables BFQ to provide applications in
 * these classes with a very low latency.
 *
 * To implement this feature, BFQ constantly tries to detect whether
 * the I/O requests in a bfq_queue come from an interactive or a soft
 * real-time application. For brevity, in these cases, the queue is
 * said to be interactive or soft real-time. In both cases, BFQ
 * privileges the service of the queue, over that of non-interactive
 * and non-soft-real-time queues. This privileging is performed,
 * mainly, by raising the weight of the queue. So, for brevity, we
 * call just weight-raising periods the time periods during which a
 * queue is privileged, because deemed interactive or soft real-time.
 *
 * The detection of soft real-time queues/applications is described in
 * detail in the comments on the function
 * bfq_bfqq_softrt_next_start. On the other hand, the detection of an
 * interactive queue works as follows: a queue is deemed interactive
 * if it is constantly non-empty only for a limited time interval,
 * after which it does become empty.
 * The queue may be deemed interactive again (for a limited time), if
 * it restarts being constantly non-empty, provided that this happens
 * only after the queue has remained empty for a given minimum idle
 * time.
 *
 * By default, BFQ computes automatically the above maximum time
 * interval, i.e., the time interval after which a constantly
 * non-empty queue stops being deemed interactive. Since a queue is
 * weight-raised while it is deemed interactive, this maximum time
 * interval happens to coincide with the (maximum) duration of the
 * weight-raising for interactive queues.
 *
 * Finally, BFQ also features additional heuristics for preserving
 * both a low latency and a high throughput on NCQ-capable, rotational
 * or flash-based devices, and for getting the job done quickly for
 * applications consisting of many I/O-bound processes.
 *
 * NOTE: if the main or only goal, with a given device, is to achieve
 * the maximum-possible throughput at all times, then do switch off
 * all low-latency heuristics for that device, by setting low_latency
 * to 0.
 *
 * BFQ is described in [1], where also a reference to the initial,
 * more theoretical paper on BFQ can be found. The interested reader
 * can find in the latter paper full details on the main algorithm, as
 * well as formulas of the guarantees and formal proofs of all the
 * properties. With respect to the version of BFQ presented in these
 * papers, this implementation adds a few more heuristics, such as the
 * ones that guarantee a low latency to interactive and soft real-time
 * applications, and a hierarchical extension based on H-WF2Q+.
 *
 * B-WF2Q+ is based on WF2Q+, which is described in [2], together with
 * H-WF2Q+, while the augmented tree used here to implement B-WF2Q+
 * with O(log N) complexity derives from the one introduced with EEVDF
 * in [3].
 *
 * [1] P. Valente, A. Avanzini, "Evolution of the BFQ Storage I/O
 *     Scheduler", Proceedings of the First Workshop on Mobile System
 *     Technologies (MST-2015), May 2015.
 *     http://algogroup.unimore.it/people/paolo/disk_sched/mst-2015.pdf
 *
 * [2] Jon C.R. Bennett and H. Zhang, "Hierarchical Packet Fair Queueing
 *     Algorithms", IEEE/ACM Transactions on Networking, 5(5):675-689,
 *     Oct 1997.
 *
 *     http://www.cs.cmu.edu/~hzhang/papers/TON-97-Oct.ps.gz
 *
 * [3] I. Stoica and H. Abdel-Wahab, "Earliest Eligible Virtual Deadline
 *     First: A Flexible and Accurate Mechanism for Proportional Share
 *     Resource Allocation", technical report.
 *
 *     http://www.cs.berkeley.edu/~istoica/papers/eevdf-tr-95.pdf
 */
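
/*
 * Illustrative example (added for clarity, not part of the original
 * BFQ documentation): with the service-domain scheme described above,
 * if two continuously-backlogged queues have weights 100 and 200,
 * then over any sufficiently long time interval B-WF2Q+ lets them
 * consume about 1/3 and 2/3 of the served sectors, respectively,
 * regardless of fluctuations of the device throughput.
 */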
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/blkdev.h>
#include <linux/cgroup.h>
#include <linux/ktime.h>
#include <linux/rbtree.h>
#include <linux/ioprio.h>
#include <linux/sbitmap.h>
#include <linux/delay.h>
#include <linux/backing-dev.h>

#include <trace/events/block.h>

#include "elevator.h"
#include "blk.h"
#include "blk-mq.h"
#include "blk-mq-tag.h"
#include "blk-mq-sched.h"
#include "bfq-iosched.h"
#include "blk-wbt.h"

#define BFQ_BFQQ_FNS(name)						\
void bfq_mark_bfqq_##name(struct bfq_queue *bfqq)			\
{									\
	__set_bit(BFQQF_##name, &(bfqq)->flags);			\
}									\
void bfq_clear_bfqq_##name(struct bfq_queue *bfqq)			\
{									\
	__clear_bit(BFQQF_##name, &(bfqq)->flags);			\
}									\
int bfq_bfqq_##name(const struct bfq_queue *bfqq)			\
{									\
	return test_bit(BFQQF_##name, &(bfqq)->flags);			\
}

BFQ_BFQQ_FNS(just_created);
BFQ_BFQQ_FNS(busy);
BFQ_BFQQ_FNS(wait_request);
BFQ_BFQQ_FNS(non_blocking_wait_rq);
BFQ_BFQQ_FNS(fifo_expire);
BFQ_BFQQ_FNS(has_short_ttime);
BFQ_BFQQ_FNS(sync);
BFQ_BFQQ_FNS(IO_bound);
BFQ_BFQQ_FNS(in_large_burst);
BFQ_BFQQ_FNS(coop);
BFQ_BFQQ_FNS(split_coop);
BFQ_BFQQ_FNS(softrt_update);
#undef BFQ_BFQQ_FNS
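
/*
 * For reference (comment added for clarity, not in the original
 * sources): each BFQ_BFQQ_FNS(name) instance above expands to three
 * helpers; e.g., BFQ_BFQQ_FNS(busy) defines bfq_mark_bfqq_busy(),
 * bfq_clear_bfqq_busy() and bfq_bfqq_busy(), which respectively set,
 * clear and test the BFQQF_busy bit in bfqq->flags.
 */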

/* Expiration time of async (0) and sync (1) requests, in ns. */
static const u64 bfq_fifo_expire[2] = { NSEC_PER_SEC / 4, NSEC_PER_SEC / 8 };

/* Maximum backwards seek (magic number lifted from CFQ), in KiB. */
static const int bfq_back_max = 16 * 1024;

/* Penalty of a backwards seek, in number of sectors. */
static const int bfq_back_penalty = 2;

/* Idling period duration, in ns. */
static u64 bfq_slice_idle = NSEC_PER_SEC / 125;

/* Minimum number of assigned budgets for which stats are safe to compute. */
static const int bfq_stats_min_budgets = 194;

/* Default maximum budget values, in sectors and number of requests. */
static const int bfq_default_max_budget = 16 * 1024;

/*
 * When a sync request is dispatched, the queue that contains that
 * request, and all the ancestor entities of that queue, are charged
 * with the number of sectors of the request. In contrast, if the
 * request is async, then the queue and its ancestor entities are
 * charged with the number of sectors of the request, multiplied by
 * the factor below. This throttles the bandwidth for async I/O,
 * w.r.t. sync I/O, and it is done to counter the tendency of async
 * writes to steal I/O throughput from reads.
 *
 * The current value of this parameter is the result of a tuning with
 * several hardware and software configurations. We tried to find the
 * lowest value for which writes do not cause noticeable problems to
 * reads. In fact, the lower this parameter, the more stable the I/O
 * control, in the following respect. The lower this parameter is,
 * the less the bandwidth enjoyed by a group decreases
 * - when the group does writes, w.r.t. when it does reads;
 * - when other groups do reads, w.r.t. when they do writes.
 */
static const int bfq_async_charge_factor = 3;

/* Default timeout values, in jiffies, approximating CFQ defaults. */
const int bfq_timeout = HZ / 8;

/*
 * Time limit for merging (see comments in bfq_setup_cooperator). Set
 * to the slowest value that, in our tests, proved to be effective in
 * removing false positives, while not causing true positives to miss
 * queue merging.
 *
 * As can be deduced from the low time limit below, queue merging, if
 * successful, happens at the very beginning of the I/O of the involved
 * cooperating processes, as a consequence of the arrival of the very
 * first requests from each cooperator. After that, there is very
 * little chance to find cooperators.
 */
static const unsigned long bfq_merge_time_limit = HZ/10;

static struct kmem_cache *bfq_pool;

/* Below this threshold (in ns), we consider thinktime immediate. */
#define BFQ_MIN_TT		(2 * NSEC_PER_MSEC)

/* hw_tag detection: parallel requests threshold and min samples needed. */
#define BFQ_HW_QUEUE_THRESHOLD	3
#define BFQ_HW_QUEUE_SAMPLES	32

#define BFQQ_SEEK_THR		(sector_t)(8 * 100)
#define BFQQ_SECT_THR_NONROT	(sector_t)(2 * 32)
#define BFQ_RQ_SEEKY(bfqd, last_pos, rq) \
	(get_sdist(last_pos, rq) >			\
	 BFQQ_SEEK_THR &&				\
	 (!blk_queue_nonrot(bfqd->queue) ||		\
	  blk_rq_sectors(rq) < BFQQ_SECT_THR_NONROT))
#define BFQQ_CLOSE_THR		(sector_t)(8 * 1024)
#define BFQQ_SEEKY(bfqq)	(hweight32(bfqq->seek_history) > 19)
/*
 * Sync random I/O is likely to be confused with soft real-time I/O,
 * because it is characterized by limited throughput and apparently
 * isochronous arrival pattern. To avoid false positives, queues
 * containing only random (seeky) I/O are prevented from being tagged
 * as soft real-time.
 */
#define BFQQ_TOTALLY_SEEKY(bfqq)	(bfqq->seek_history == -1)

/* Min number of samples required to perform peak-rate update */
#define BFQ_RATE_MIN_SAMPLES	32
/* Min observation time interval required to perform a peak-rate update (ns) */
#define BFQ_RATE_MIN_INTERVAL	(300*NSEC_PER_MSEC)
/* Target observation time interval for a peak-rate update (ns) */
#define BFQ_RATE_REF_INTERVAL	NSEC_PER_SEC

/*
 * Shift used for peak-rate fixed precision calculations.
 * With
 * - the current shift: 16 positions
 * - the current type used to store rate: u32
 * - the current unit of measure for rate: [sectors/usec], or, more precisely,
 *   [(sectors/usec) / 2^BFQ_RATE_SHIFT] to take into account the shift,
 * the range of rates that can be stored is
 * [1 / 2^BFQ_RATE_SHIFT, 2^(32 - BFQ_RATE_SHIFT)] sectors/usec =
 * [1 / 2^16, 2^16] sectors/usec = [15e-6, 65536] sectors/usec =
 * [15, 65G] sectors/sec
 * Which, assuming a sector size of 512B, corresponds to a range of
 * [7.5K, 33T] B/sec
 */
#define BFQ_RATE_SHIFT		16
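
/*
 * Concrete instance of the encoding above (added for clarity, not in
 * the original sources): a device sustaining 1 sector/usec, i.e.,
 * about 512 MB/s with 512-byte sectors, is stored as
 * 1 << BFQ_RATE_SHIFT = 65536, while the smallest representable rate,
 * 1 / 2^16 sectors/usec, is about 15 sectors/sec, i.e., the ~7.5 KB/s
 * lower bound of the range mentioned above.
 */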

/*
 * When configured for computing the duration of the weight-raising
 * for interactive queues automatically (see the comments at the
 * beginning of this file), BFQ does it using the following formula:
 * duration = (ref_rate / r) * ref_wr_duration,
 * where r is the peak rate of the device, and ref_rate and
 * ref_wr_duration are two reference parameters. In particular,
 * ref_rate is the peak rate of the reference storage device (see
 * below), and ref_wr_duration is about the maximum time needed, with
 * BFQ and while reading two files in parallel, to load typical large
 * applications on the reference device (see the comments on
 * max_service_from_wr below, for more details on how ref_wr_duration
 * is obtained). In practice, the slower/faster the device at hand
 * is, the more/less it takes to load applications with respect to the
 * reference device. Accordingly, the longer/shorter BFQ grants
 * weight raising to interactive applications.
 *
 * BFQ uses two different reference pairs (ref_rate, ref_wr_duration),
 * depending on whether the device is rotational or non-rotational.
 *
 * In the following definitions, ref_rate[0] and ref_wr_duration[0]
 * are the reference values for a rotational device, whereas
 * ref_rate[1] and ref_wr_duration[1] are the reference values for a
 * non-rotational device. The reference rates are not the actual peak
 * rates of the devices used as a reference, but slightly lower
 * values. The reason for using slightly lower values is that the
 * peak-rate estimator tends to yield slightly lower values than the
 * actual peak rate (it can yield the actual peak rate only if there
 * is only one process doing I/O, and the process does sequential
 * I/O).
 *
 * The reference peak rates are measured in sectors/usec, left-shifted
 * by BFQ_RATE_SHIFT.
 */
static int ref_rate[2] = {14000, 33000};
/*
 * To improve readability, a conversion function is used to initialize
 * the following array, which entails that the array can be
 * initialized only in a function.
 */
static int ref_wr_duration[2];
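
/*
 * Worked example of the formula above (added for clarity, not in the
 * original sources): if the estimated peak rate r of the drive at
 * hand is half the ref_rate of its class (rotational or
 * non-rotational), then duration = (ref_rate / r) * ref_wr_duration
 * yields an interactive weight-raising duration twice as long as
 * ref_wr_duration; the result is then clamped between 3 and 25
 * seconds in bfq_wr_duration() below.
 */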

/*
 * BFQ uses the above-detailed, time-based weight-raising mechanism to
 * privilege interactive tasks. This mechanism is vulnerable to the
 * following false positives: I/O-bound applications that will go on
 * doing I/O for much longer than the duration of weight
 * raising. These applications have basically no benefit from being
 * weight-raised at the beginning of their I/O. On the opposite end,
 * while being weight-raised, these applications
 * a) unjustly steal throughput from applications that may actually
 *    need low latency;
 * b) make BFQ uselessly perform device idling; device idling results
 *    in loss of device throughput with most flash-based storage, and
 *    may increase latencies when used purposelessly.
 *
 * BFQ tries to reduce these problems, by adopting the following
 * countermeasure. To introduce this countermeasure, we need first to
 * finish explaining how the duration of weight-raising for
 * interactive tasks is computed.
 *
 * For a bfq_queue deemed as interactive, the duration of weight
 * raising is dynamically adjusted, as a function of the estimated
 * peak rate of the device, so as to be equal to the time needed to
 * execute the 'largest' interactive task we benchmarked so far. By
 * largest task, we mean the task for which each involved process has
 * to do more I/O than for any of the other tasks we benchmarked. This
 * reference interactive task is the start-up of LibreOffice Writer,
 * and in this task each process/bfq_queue needs to have at most ~110K
 * sectors transferred.
 *
 * This last piece of information enables BFQ to reduce the actual
 * duration of weight-raising for at least one class of I/O-bound
 * applications: those doing sequential or quasi-sequential I/O. An
 * example is file copy. In fact, once started, the main I/O-bound
 * processes of these applications usually consume the above 110K
 * sectors in much less time than the processes of an application that
 * is starting, because these I/O-bound processes will greedily devote
 * almost all their CPU cycles only to their target,
 * throughput-friendly I/O operations. This is even more true if BFQ
 * happens to be underestimating the device peak rate, and thus
 * overestimating the duration of weight raising. But, according to
 * our measurements, once transferred 110K sectors, these processes
 * have no right to be weight-raised any longer.
 *
 * Based on the last consideration, BFQ ends weight-raising for a
 * bfq_queue if the latter happens to have received an amount of
 * service at least equal to the following constant. The constant is
 * set to slightly more than 110K, to have a minimum safety margin.
 *
 * This early ending of weight-raising reduces the amount of time
 * during which interactive false positives cause the two problems
 * described at the beginning of these comments.
 */
static const unsigned long max_service_from_wr = 120000;

/*
 * Maximum time between the creation of two queues, for stable merge
 * to be activated (in ms)
 */
static const unsigned long bfq_activation_stable_merging = 600;
/*
 * Minimum time to be waited before evaluating delayed stable merge (in ms)
 */
static const unsigned long bfq_late_stable_merging = 600;

#define RQ_BIC(rq)		((struct bfq_io_cq *)((rq)->elv.priv[0]))
#define RQ_BFQQ(rq)		((rq)->elv.priv[1])

struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync,
			      unsigned int actuator_idx)
{
	if (is_sync)
		return bic->bfqq[1][actuator_idx];

	return bic->bfqq[0][actuator_idx];
}

static void bfq_put_stable_ref(struct bfq_queue *bfqq);

void bic_set_bfqq(struct bfq_io_cq *bic,
		  struct bfq_queue *bfqq,
		  bool is_sync,
		  unsigned int actuator_idx)
{
	struct bfq_queue *old_bfqq = bic->bfqq[is_sync][actuator_idx];

	/*
	 * If bfqq != NULL, then a non-stable queue merge between
	 * bic->bfqq and bfqq is happening here. This causes troubles
	 * in the following case: bic->bfqq has also been scheduled
	 * for a possible stable merge with bic->stable_merge_bfqq,
	 * and bic->stable_merge_bfqq == bfqq happens to
	 * hold. Troubles occur because bfqq may then undergo a split,
	 * thereby becoming eligible for a stable merge. Yet, if
	 * bic->stable_merge_bfqq points exactly to bfqq, then bfqq
	 * would be stably merged with itself. To avoid this anomaly,
	 * we cancel the stable merge if
	 * bic->stable_merge_bfqq == bfqq.
	 */
	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data[actuator_idx];

	/* Clear bic pointer if bfqq is detached from this bic */
	if (old_bfqq && old_bfqq->bic == bic)
		old_bfqq->bic = NULL;

	if (is_sync)
		bic->bfqq[1][actuator_idx] = bfqq;
	else
		bic->bfqq[0][actuator_idx] = bfqq;

	if (bfqq && bfqq_data->stable_merge_bfqq == bfqq) {
		/*
		 * Actually, these same instructions are executed also
		 * in bfq_setup_cooperator, in case of abort or actual
		 * execution of a stable merge. We could avoid
		 * repeating these instructions there too, but if we
		 * did so, we would nest even more complexity in this
		 * function.
		 */
		bfq_put_stable_ref(bfqq_data->stable_merge_bfqq);

		bfqq_data->stable_merge_bfqq = NULL;
	}
}

struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic)
{
	return bic->icq.q->elevator->elevator_data;
}

/**
 * icq_to_bic - convert iocontext queue structure to bfq_io_cq.
 * @icq: the iocontext queue.
 */
static struct bfq_io_cq *icq_to_bic(struct io_cq *icq)
{
	/* bic->icq is the first member, %NULL will convert to %NULL */
	return container_of(icq, struct bfq_io_cq, icq);
}

/**
 * bfq_bic_lookup - search, in the current task's io_context, for the bic
 *                  associated with @q.
 * @q: the request queue.
 */
static struct bfq_io_cq *bfq_bic_lookup(struct request_queue *q)
{
	struct bfq_io_cq *icq;
	unsigned long flags;

	if (!current->io_context)
		return NULL;

	spin_lock_irqsave(&q->queue_lock, flags);
	icq = icq_to_bic(ioc_lookup_icq(q));
	spin_unlock_irqrestore(&q->queue_lock, flags);

	return icq;
}

/*
 * Scheduler run of queue, if there are requests pending and no one in the
 * driver that will restart queueing.
 */
void bfq_schedule_dispatch(struct bfq_data *bfqd)
{
	lockdep_assert_held(&bfqd->lock);

	if (bfqd->queued != 0) {
		bfq_log(bfqd, "schedule dispatch");
		blk_mq_run_hw_queues(bfqd->queue, true);
	}
}

#define bfq_class_idle(bfqq)	((bfqq)->ioprio_class == IOPRIO_CLASS_IDLE)

#define bfq_sample_valid(samples)	((samples) > 80)

/*
 * Lifted from AS - choose which of rq1 and rq2 is best served now.
 * We choose the request that is closer to the head right now. Distance
 * behind the head is penalized and only allowed to a certain extent.
 */
static struct request *bfq_choose_req(struct bfq_data *bfqd,
				      struct request *rq1,
				      struct request *rq2,
				      sector_t last)
{
	sector_t s1, s2, d1 = 0, d2 = 0;
	unsigned long back_max;
#define BFQ_RQ1_WRAP	0x01 /* request 1 wraps */
#define BFQ_RQ2_WRAP	0x02 /* request 2 wraps */
	unsigned int wrap = 0; /* bit mask: requests behind the disk head? */

	if (!rq1 || rq1 == rq2)
		return rq2;
	if (!rq2)
		return rq1;

	if (rq_is_sync(rq1) && !rq_is_sync(rq2))
		return rq1;
	else if (rq_is_sync(rq2) && !rq_is_sync(rq1))
		return rq2;
	if ((rq1->cmd_flags & REQ_META) && !(rq2->cmd_flags & REQ_META))
		return rq1;
	else if ((rq2->cmd_flags & REQ_META) && !(rq1->cmd_flags & REQ_META))
		return rq2;

	s1 = blk_rq_pos(rq1);
	s2 = blk_rq_pos(rq2);

	/*
	 * By definition, 1KiB is 2 sectors.
	 */
	back_max = bfqd->bfq_back_max * 2;

	/*
	 * Strict one way elevator _except_ in the case where we allow
	 * short backward seeks which are biased as twice the cost of a
	 * similar forward seek.
	 */
	if (s1 >= last)
		d1 = s1 - last;
	else if (s1 + back_max >= last)
		d1 = (last - s1) * bfqd->bfq_back_penalty;
	else
		wrap |= BFQ_RQ1_WRAP;

	if (s2 >= last)
		d2 = s2 - last;
	else if (s2 + back_max >= last)
		d2 = (last - s2) * bfqd->bfq_back_penalty;
	else
		wrap |= BFQ_RQ2_WRAP;

	/* Found required data */

	/*
	 * By doing switch() on the bit mask "wrap" we avoid having to
	 * check two variables for all permutations: --> faster!
	 */
	switch (wrap) {
	case 0: /* common case for CFQ: rq1 and rq2 not wrapped */
		if (d1 < d2)
			return rq1;
		else if (d2 < d1)
			return rq2;

		if (s1 >= s2)
			return rq1;
		else
			return rq2;

	case BFQ_RQ2_WRAP:
		return rq1;
	case BFQ_RQ1_WRAP:
		return rq2;
	case BFQ_RQ1_WRAP|BFQ_RQ2_WRAP: /* both rqs wrapped */
	default:
		/*
		 * Since both rqs are wrapped,
		 * start with the one that's further behind head
		 * (--> only *one* back seek required),
		 * since back seek takes more time than forward.
		 */
		if (s1 <= s2)
			return rq1;
		else
			return rq2;
	}
}
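
/*
 * Illustrative example for bfq_choose_req() (added for clarity, not
 * in the original sources): with the default bfq_back_penalty = 2 and
 * the head at sector 1000, a request at sector 1100 gets d = 100,
 * while a request at sector 950 gets d = (1000 - 950) * 2 = 100; the
 * tie is then broken in favour of the request at the larger sector,
 * i.e., the forward one.
 */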

#define BFQ_LIMIT_INLINE_DEPTH 16

#ifdef CONFIG_BFQ_GROUP_IOSCHED
static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
{
	struct bfq_data *bfqd = bfqq->bfqd;
	struct bfq_entity *entity = &bfqq->entity;
	struct bfq_entity *inline_entities[BFQ_LIMIT_INLINE_DEPTH];
	struct bfq_entity **entities = inline_entities;
	int depth, level, alloc_depth = BFQ_LIMIT_INLINE_DEPTH;
	int class_idx = bfqq->ioprio_class - 1;
	struct bfq_sched_data *sched_data;
	unsigned long wsum;
	bool ret = false;

	if (!entity->on_st_or_in_serv)
		return false;

retry:
	spin_lock_irq(&bfqd->lock);
	/* +1 for bfqq entity, root cgroup not included */
	depth = bfqg_to_blkg(bfqq_group(bfqq))->blkcg->css.cgroup->level + 1;
	if (depth > alloc_depth) {
		spin_unlock_irq(&bfqd->lock);
		if (entities != inline_entities)
			kfree(entities);
		entities = kmalloc_array(depth, sizeof(*entities), GFP_NOIO);
		if (!entities)
			return false;
		alloc_depth = depth;
		goto retry;
	}

	sched_data = entity->sched_data;
	/* Gather our ancestors as we need to traverse them in reverse order */
	level = 0;
	for_each_entity(entity) {
		/*
		 * If at some level entity is not even active, allow request
		 * queueing so that BFQ knows there's work to do and activate
		 * entities.
		 */
		if (!entity->on_st_or_in_serv)
			goto out;
		/* Uh, more parents than cgroup subsystem thinks? */
		if (WARN_ON_ONCE(level >= depth))
			break;
		entities[level++] = entity;
	}
	WARN_ON_ONCE(level != depth);
	for (level--; level >= 0; level--) {
		entity = entities[level];
		if (level > 0) {
			wsum = bfq_entity_service_tree(entity)->wsum;
		} else {
			int i;
			/*
			 * For bfqq itself we take into account service trees
			 * of all higher priority classes and multiply their
			 * weights so that low prio queue from higher class
			 * gets more requests than high prio queue from lower
			 * class.
			 */
			wsum = 0;
			for (i = 0; i <= class_idx; i++) {
				wsum = wsum * IOPRIO_BE_NR +
					sched_data->service_tree[i].wsum;
			}
		}
		limit = DIV_ROUND_CLOSEST(limit * entity->weight, wsum);
		if (entity->allocated >= limit) {
			bfq_log_bfqq(bfqq->bfqd, bfqq,
				"too many requests: allocated %d limit %d level %d",
				entity->allocated, limit, level);
			ret = true;
			break;
		}
	}
out:
	spin_unlock_irq(&bfqd->lock);
	if (entities != inline_entities)
		kfree(entities);
	return ret;
}
#else
static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
{
	return false;
}
#endif
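
/*
 * Numeric example for the limit scaling above (added for clarity, not
 * in the original sources): at a level where the entity has weight
 * 100 and the relevant service trees total wsum = 300, a starting
 * limit of 64 becomes DIV_ROUND_CLOSEST(64 * 100, 300) = 21; once the
 * entity has 21 or more requests allocated,
 * bfqq_request_over_limit() returns true.
 */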

/*
 * Async I/O can easily starve sync I/O (both sync reads and sync
 * writes), by consuming all tags. Similarly, storms of sync writes,
 * such as those that sync(2) may trigger, can starve sync reads.
 * Limit depths of async I/O and sync writes so as to counter both
 * problems.
 *
 * Also if a bfq queue or its parent cgroup consume more tags than would be
 * appropriate for their weight, we trim the available tag depth to 1. This
 * avoids a situation where one cgroup can starve another cgroup from tags and
 * thus block service differentiation among cgroups. Note that because the
 * queue / cgroup already has many requests allocated and queued, this does not
 * significantly affect service guarantees coming from the BFQ scheduling
 * algorithm.
 */
static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
{
	struct bfq_data *bfqd = data->q->elevator->elevator_data;
	struct bfq_io_cq *bic = bfq_bic_lookup(data->q);
	int depth;
	unsigned limit = data->q->nr_requests;
	unsigned int act_idx;

	/* Sync reads have full depth available */
	if (op_is_sync(opf) && !op_is_write(opf)) {
		depth = 0;
	} else {
		depth = bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(opf)];
		limit = (limit * depth) >> bfqd->full_depth_shift;
	}

	for (act_idx = 0; bic && act_idx < bfqd->num_actuators; act_idx++) {
		struct bfq_queue *bfqq =
			bic_to_bfqq(bic, op_is_sync(opf), act_idx);

		/*
		 * Does queue (or any parent entity) exceed number of
		 * requests that should be available to it? Heavily
		 * limit depth so that it cannot consume more
		 * available requests and thus starve other entities.
		 */
		if (bfqq && bfqq_request_over_limit(bfqq, limit)) {
			depth = 1;
			break;
		}
	}
	bfq_log(bfqd, "[%s] wr_busy %d sync %d depth %u",
		__func__, bfqd->wr_busy_queues, op_is_sync(opf), depth);
	if (depth)
		data->shallow_depth = depth;
}

static struct bfq_queue *
bfq_rq_pos_tree_lookup(struct bfq_data *bfqd, struct rb_root *root,
		       sector_t sector, struct rb_node **ret_parent,
		       struct rb_node ***rb_link)
{
	struct rb_node **p, *parent;
	struct bfq_queue *bfqq = NULL;

	parent = NULL;
	p = &root->rb_node;
	while (*p) {
		struct rb_node **n;

		parent = *p;
		bfqq = rb_entry(parent, struct bfq_queue, pos_node);

		/*
		 * Sort strictly based on sector. Smallest to the left,
		 * largest to the right.
		 */
		if (sector > blk_rq_pos(bfqq->next_rq))
			n = &(*p)->rb_right;
		else if (sector < blk_rq_pos(bfqq->next_rq))
			n = &(*p)->rb_left;
		else
			break;
		p = n;
		bfqq = NULL;
	}

	*ret_parent = parent;
	if (rb_link)
		*rb_link = p;

	bfq_log(bfqd, "rq_pos_tree_lookup %llu: returning %d",
		(unsigned long long)sector,
		bfqq ? bfqq->pid : 0);

	return bfqq;
}

static bool bfq_too_late_for_merging(struct bfq_queue *bfqq)
{
	return bfqq->service_from_backlogged > 0 &&
		time_is_before_jiffies(bfqq->first_IO_time +
				       bfq_merge_time_limit);
}

/*
 * The following function is marked as __cold not because it is
 * actually cold, but for the same performance goal described in the
 * comments on the likely() at the beginning of
 * bfq_setup_cooperator(). Unexpectedly, to reach an even lower
 * execution time for the case where this function is not invoked, we
 * had to add an unlikely() in each involved if().
 */
void __cold
bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq)
{
	struct rb_node **p, *parent;
	struct bfq_queue *__bfqq;

	if (bfqq->pos_root) {
		rb_erase(&bfqq->pos_node, bfqq->pos_root);
		bfqq->pos_root = NULL;
	}

	/* oom_bfqq does not participate in queue merging */
	if (bfqq == &bfqd->oom_bfqq)
		return;

	/*
	 * bfqq cannot be merged any longer (see comments in
	 * bfq_setup_cooperator): no point in adding bfqq into the
	 * position tree.
	 */
	if (bfq_too_late_for_merging(bfqq))
		return;

	if (bfq_class_idle(bfqq))
		return;
	if (!bfqq->next_rq)
		return;

	bfqq->pos_root = &bfqq_group(bfqq)->rq_pos_tree;
	__bfqq = bfq_rq_pos_tree_lookup(bfqd, bfqq->pos_root,
			blk_rq_pos(bfqq->next_rq), &parent, &p);
	if (!__bfqq) {
		rb_link_node(&bfqq->pos_node, parent, p);
		rb_insert_color(&bfqq->pos_node, bfqq->pos_root);
	} else
		bfqq->pos_root = NULL;
}

/*
 * The following function returns false either if every active queue
 * must receive the same share of the throughput (symmetric scenario),
 * or, as a special case, if bfqq must receive a share of the
 * throughput lower than or equal to the share that every other active
 * queue must receive. If bfqq does sync I/O, then these are the only
 * two cases where bfqq happens to be guaranteed its share of the
 * throughput even if I/O dispatching is not plugged when bfqq remains
 * temporarily empty (for more details, see the comments in the
 * function bfq_better_to_idle()). For this reason, the return value
 * of this function is used to check whether I/O-dispatch plugging can
 * be avoided.
 *
 * The above first case (symmetric scenario) occurs when:
 * 1) all active queues have the same weight,
 * 2) all active queues belong to the same I/O-priority class,
 * 3) all active groups at the same level in the groups tree have the same
 *    weight,
 * 4) all active groups at the same level in the groups tree have the same
 *    number of children.
 *
 * Unfortunately, keeping the necessary state for evaluating exactly
 * the last two symmetry sub-conditions above would be quite complex
 * and time consuming. Therefore this function evaluates, instead,
 * only the following stronger three sub-conditions, for which it is
 * much easier to maintain the needed state:
 * 1) all active queues have the same weight,
 * 2) all active queues belong to the same I/O-priority class,
 * 3) there is at most one active group.
 * In particular, the last condition is always true if hierarchical
 * support or the cgroups interface are not enabled, thus no state
 * needs to be maintained in this case.
 */
static bool bfq_asymmetric_scenario(struct bfq_data *bfqd,
				    struct bfq_queue *bfqq)
{
	bool smallest_weight = bfqq &&
		bfqq->weight_counter &&
		bfqq->weight_counter ==
		container_of(
			rb_first_cached(&bfqd->queue_weights_tree),
			struct bfq_weight_counter,
			weights_node);

	/*
	 * For queue weights to differ, queue_weights_tree must contain
	 * at least two nodes.
	 */
	bool varied_queue_weights = !smallest_weight &&
		!RB_EMPTY_ROOT(&bfqd->queue_weights_tree.rb_root) &&
		(bfqd->queue_weights_tree.rb_root.rb_node->rb_left ||
		 bfqd->queue_weights_tree.rb_root.rb_node->rb_right);

	bool multiple_classes_busy =
		(bfqd->busy_queues[0] && bfqd->busy_queues[1]) ||
		(bfqd->busy_queues[0] && bfqd->busy_queues[2]) ||
		(bfqd->busy_queues[1] && bfqd->busy_queues[2]);

	return varied_queue_weights || multiple_classes_busy
#ifdef CONFIG_BFQ_GROUP_IOSCHED
	       || bfqd->num_groups_with_pending_reqs > 1
#endif
		;
}

/*
 * If the weight-counter tree passed as input contains no counter for
 * the weight of the input queue, then add that counter; otherwise just
 * increment the existing counter.
 *
 * Note that weight-counter trees contain few nodes in mostly symmetric
 * scenarios. For example, if all queues have the same weight, then the
 * weight-counter tree for the queues may contain at most one node.
 * This holds even if low_latency is on, because weight-raised queues
 * are not inserted in the tree.
 * In most scenarios, the rate at which nodes are created/destroyed
 * should be low too.
 */
void bfq_weights_tree_add(struct bfq_queue *bfqq)
{
	struct rb_root_cached *root = &bfqq->bfqd->queue_weights_tree;
	struct bfq_entity *entity = &bfqq->entity;
	struct rb_node **new = &(root->rb_root.rb_node), *parent = NULL;
	bool leftmost = true;

	/*
	 * Do not insert if the queue is already associated with a
	 * counter, which happens if:
	 * 1) a request arrival has caused the queue to become both
	 *    non-weight-raised, and hence change its weight, and
	 *    backlogged; in this respect, each of the two events
	 *    causes an invocation of this function,
	 * 2) this is the invocation of this function caused by the
	 *    second event. This second invocation is actually useless,
	 *    and we handle this fact by exiting immediately. More
	 *    efficient or clearer solutions might possibly be adopted.
	 */
	if (bfqq->weight_counter)
		return;

	while (*new) {
		struct bfq_weight_counter *__counter = container_of(*new,
						struct bfq_weight_counter,
						weights_node);
		parent = *new;

		if (entity->weight == __counter->weight) {
			bfqq->weight_counter = __counter;
			goto inc_counter;
		}
		if (entity->weight < __counter->weight)
			new = &((*new)->rb_left);
		else {
			new = &((*new)->rb_right);
			leftmost = false;
		}
	}

	bfqq->weight_counter = kzalloc(sizeof(struct bfq_weight_counter),
				       GFP_ATOMIC);

	/*
	 * In the unlucky event of an allocation failure, we just
	 * exit. This will cause the weight of the queue to not be
	 * considered in bfq_asymmetric_scenario, which, in its turn,
	 * causes the scenario to be deemed wrongly symmetric in case
	 * bfqq's weight would have been the only weight making the
	 * scenario asymmetric. On the bright side, no unbalance will
	 * however occur when bfqq becomes inactive again (the
	 * invocation of this function is triggered by an activation
	 * of the queue). In fact, bfq_weights_tree_remove does nothing
	 * if !bfqq->weight_counter.
	 */
	if (unlikely(!bfqq->weight_counter))
		return;

	bfqq->weight_counter->weight = entity->weight;
	rb_link_node(&bfqq->weight_counter->weights_node, parent, new);
	rb_insert_color_cached(&bfqq->weight_counter->weights_node, root,
			       leftmost);

inc_counter:
	bfqq->weight_counter->num_active++;
	bfqq->ref++;
}

/*
 * Decrement the weight counter associated with the queue, and, if the
 * counter reaches 0, remove the counter from the tree.
 * See the comments to the function bfq_weights_tree_add() for considerations
 * about overhead.
 */
void bfq_weights_tree_remove(struct bfq_queue *bfqq)
{
	struct rb_root_cached *root;

	if (!bfqq->weight_counter)
		return;

	root = &bfqq->bfqd->queue_weights_tree;
	bfqq->weight_counter->num_active--;
	if (bfqq->weight_counter->num_active > 0)
		goto reset_entity_pointer;

	rb_erase_cached(&bfqq->weight_counter->weights_node, root);
	kfree(bfqq->weight_counter);

reset_entity_pointer:
	bfqq->weight_counter = NULL;
	bfq_put_queue(bfqq);
}

/*
 * Return expired entry, or NULL to just start from scratch in rbtree.
 */
static struct request *bfq_check_fifo(struct bfq_queue *bfqq,
				      struct request *last)
{
	struct request *rq;

	if (bfq_bfqq_fifo_expire(bfqq))
		return NULL;

	bfq_mark_bfqq_fifo_expire(bfqq);

	rq = rq_entry_fifo(bfqq->fifo.next);

	if (rq == last || ktime_get_ns() < rq->fifo_time)
		return NULL;

	bfq_log_bfqq(bfqq->bfqd, bfqq, "check_fifo: returned %p", rq);
	return rq;
}

static struct request *bfq_find_next_rq(struct bfq_data *bfqd,
					struct bfq_queue *bfqq,
					struct request *last)
{
	struct rb_node *rbnext = rb_next(&last->rb_node);
	struct rb_node *rbprev = rb_prev(&last->rb_node);
	struct request *next, *prev = NULL;

	/* Follow expired path, else get first next available. */
	next = bfq_check_fifo(bfqq, last);
	if (next)
		return next;

	if (rbprev)
		prev = rb_entry_rq(rbprev);

	if (rbnext)
		next = rb_entry_rq(rbnext);
	else {
		rbnext = rb_first(&bfqq->sort_list);
		if (rbnext && rbnext != &last->rb_node)
			next = rb_entry_rq(rbnext);
	}

	return bfq_choose_req(bfqd, next, prev, blk_rq_pos(last));
}

/* see the definition of bfq_async_charge_factor for details */
static unsigned long bfq_serv_to_charge(struct request *rq,
					struct bfq_queue *bfqq)
{
	if (bfq_bfqq_sync(bfqq) || bfqq->wr_coeff > 1 ||
	    bfq_asymmetric_scenario(bfqq->bfqd, bfqq))
		return blk_rq_sectors(rq);

	return blk_rq_sectors(rq) * bfq_async_charge_factor;
}

/**
 * bfq_updated_next_req - update the queue after a new next_rq selection.
 * @bfqd: the device data the queue belongs to.
 * @bfqq: the queue to update.
 *
 * If the first request of a queue changes we make sure that the queue
 * has enough budget to serve at least its first request (if the
 * request has grown). We do this because if the queue does not have
 * enough budget for its first request, it has to go through two
 * dispatch rounds to actually get it dispatched.
 */
static void bfq_updated_next_req(struct bfq_data *bfqd,
				 struct bfq_queue *bfqq)
{
	struct bfq_entity *entity = &bfqq->entity;
	struct request *next_rq = bfqq->next_rq;
	unsigned long new_budget;

	if (!next_rq)
		return;

	if (bfqq == bfqd->in_service_queue)
		/*
		 * In order not to break guarantees, budgets cannot be
		 * changed after an entity has been selected.
		 */
		return;

	new_budget = max_t(unsigned long,
			   max_t(unsigned long, bfqq->max_budget,
				 bfq_serv_to_charge(next_rq, bfqq)),
			   entity->service);
	if (entity->budget != new_budget) {
		entity->budget = new_budget;
		bfq_log_bfqq(bfqd, bfqq, "updated next rq: new budget %lu",
					 new_budget);
		bfq_requeue_bfqq(bfqd, bfqq, false);
	}
}

static unsigned int bfq_wr_duration(struct bfq_data *bfqd)
{
	u64 dur;

	dur = bfqd->rate_dur_prod;
	do_div(dur, bfqd->peak_rate);

	/*
	 * Limit duration between 3 and 25 seconds. The upper limit
	 * has been conservatively set after the following worst case:
	 * on a QEMU/KVM virtual machine
	 * - running in a slow PC
	 * - with a virtual disk stacked on a slow low-end 5400rpm HDD
	 * - serving a heavy I/O workload, such as the sequential reading
	 *   of several files
	 * mplayer took 23 seconds to start, if constantly weight-raised.
	 *
	 * As for higher values than that accommodating the above bad
	 * scenario, tests show that higher values would often yield
	 * the opposite of the desired result, i.e., would worsen
	 * responsiveness by allowing non-interactive applications to
	 * preserve weight raising for too long.
	 *
	 * On the other end, lower values than 3 seconds make it
	 * difficult for most interactive tasks to complete their jobs
	 * before weight-raising finishes.
	 */
	return clamp_val(dur, msecs_to_jiffies(3000), msecs_to_jiffies(25000));
}

/* switch back from soft real-time to interactive weight raising */
static void switch_back_to_interactive_wr(struct bfq_queue *bfqq,
					  struct bfq_data *bfqd)
{
	bfqq->wr_coeff = bfqd->bfq_wr_coeff;
	bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
	bfqq->last_wr_start_finish = bfqq->wr_start_at_switch_to_srt;
}

static void
bfq_bfqq_resume_state(struct bfq_queue *bfqq, struct bfq_data *bfqd,
		      struct bfq_io_cq *bic, bool bfq_already_existing)
{
	unsigned int old_wr_coeff = 1;
	bool busy = bfq_already_existing && bfq_bfqq_busy(bfqq);
	unsigned int a_idx = bfqq->actuator_idx;
	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data[a_idx];

	if (bfqq_data->saved_has_short_ttime)
		bfq_mark_bfqq_has_short_ttime(bfqq);
	else
		bfq_clear_bfqq_has_short_ttime(bfqq);

	if (bfqq_data->saved_IO_bound)
		bfq_mark_bfqq_IO_bound(bfqq);
	else
		bfq_clear_bfqq_IO_bound(bfqq);

	bfqq->last_serv_time_ns = bfqq_data->saved_last_serv_time_ns;
	bfqq->inject_limit = bfqq_data->saved_inject_limit;
	bfqq->decrease_time_jif = bfqq_data->saved_decrease_time_jif;

	bfqq->entity.new_weight = bfqq_data->saved_weight;
	bfqq->ttime = bfqq_data->saved_ttime;
	bfqq->io_start_time = bfqq_data->saved_io_start_time;
	bfqq->tot_idle_time = bfqq_data->saved_tot_idle_time;
	/*
	 * Restore weight coefficient only if low_latency is on
	 */
	if (bfqd->low_latency) {
		old_wr_coeff = bfqq->wr_coeff;
		bfqq->wr_coeff = bfqq_data->saved_wr_coeff;
	}
	bfqq->service_from_wr = bfqq_data->saved_service_from_wr;
	bfqq->wr_start_at_switch_to_srt =
		bfqq_data->saved_wr_start_at_switch_to_srt;
	bfqq->last_wr_start_finish = bfqq_data->saved_last_wr_start_finish;
	bfqq->wr_cur_max_time = bfqq_data->saved_wr_cur_max_time;

	if (bfqq->wr_coeff > 1 && (bfq_bfqq_in_large_burst(bfqq) ||
	    time_is_before_jiffies(bfqq->last_wr_start_finish +
				   bfqq->wr_cur_max_time))) {
		if (bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time &&
		    !bfq_bfqq_in_large_burst(bfqq) &&
		    time_is_after_eq_jiffies(bfqq->wr_start_at_switch_to_srt +
					     bfq_wr_duration(bfqd))) {
			switch_back_to_interactive_wr(bfqq, bfqd);
		} else {
			bfqq->wr_coeff = 1;
			bfq_log_bfqq(bfqq->bfqd, bfqq,
				     "resume state: switching off wr");
		}
	}

	/* make sure weight will be updated, however we got here */
	bfqq->entity.prio_changed = 1;

	if (likely(!busy))
		return;

	if (old_wr_coeff == 1 && bfqq->wr_coeff > 1)
		bfqd->wr_busy_queues++;
	else if (old_wr_coeff > 1 && bfqq->wr_coeff == 1)
		bfqd->wr_busy_queues--;
}

static int bfqq_process_refs(struct bfq_queue *bfqq)
{
	return bfqq->ref - bfqq->entity.allocated -
		bfqq->entity.on_st_or_in_serv -
		(bfqq->weight_counter != NULL) - bfqq->stable_ref;
}

/* Empty burst list and add just bfqq (see comments on bfq_handle_burst) */
static void bfq_reset_burst_list(struct bfq_data *bfqd, struct bfq_queue *bfqq)
{
	struct bfq_queue *item;
	struct hlist_node *n;

	hlist_for_each_entry_safe(item, n, &bfqd->burst_list, burst_list_node)
		hlist_del_init(&item->burst_list_node);

	/*
	 * Start the creation of a new burst list only if there is no
	 * active queue. See comments on the conditional invocation of
	 * bfq_handle_burst().
	 */
	if (bfq_tot_busy_queues(bfqd) == 0) {
		hlist_add_head(&bfqq->burst_list_node, &bfqd->burst_list);
		bfqd->burst_size = 1;
	} else
		bfqd->burst_size = 0;

	bfqd->burst_parent_entity = bfqq->entity.parent;
}

/* Add bfqq to the list of queues in current burst (see bfq_handle_burst) */
static void bfq_add_to_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
{
	/* Increment burst size to take into account also bfqq */
	bfqd->burst_size++;

	if (bfqd->burst_size == bfqd->bfq_large_burst_thresh) {
		struct bfq_queue *pos, *bfqq_item;
		struct hlist_node *n;

		/*
		 * Enough queues have been activated shortly after each
		 * other to consider this burst as large.
		 */
		bfqd->large_burst = true;

		/*
		 * We can now mark all queues in the burst list as
		 * belonging to a large burst.
		 */
		hlist_for_each_entry(bfqq_item, &bfqd->burst_list,
				     burst_list_node)
			bfq_mark_bfqq_in_large_burst(bfqq_item);
		bfq_mark_bfqq_in_large_burst(bfqq);

		/*
		 * From now on, and until the current burst finishes, any
		 * new queue being activated shortly after the last queue
		 * was inserted in the burst can be immediately marked as
		 * belonging to a large burst. So the burst list is not
		 * needed any more. Remove it.
		 */
		hlist_for_each_entry_safe(pos, n, &bfqd->burst_list,
					  burst_list_node)
			hlist_del_init(&pos->burst_list_node);
	} else /*
		* Burst not yet large: add bfqq to the burst list. Do
		* not increment the ref counter for bfqq, because bfqq
		* is removed from the burst list before freeing bfqq
		* in put_queue.
		*/
		hlist_add_head(&bfqq->burst_list_node, &bfqd->burst_list);
}

/*
 * If many queues belonging to the same group happen to be created
 * shortly after each other, then the processes associated with these
 * queues have typically a common goal. In particular, bursts of queue
 * creations are usually caused by services or applications that spawn
 * many parallel threads/processes. Examples are systemd during boot,
 * or git grep. To help these processes get their job done as soon as
 * possible, it is usually better to not grant either weight-raising
 * or device idling to their queues, unless these queues must be
 * protected from the I/O flowing through other active queues.
 *
 * In this comment we describe, firstly, the reasons why this fact
 * holds, and, secondly, the next function, which implements the main
 * steps needed to properly mark these queues so that they can then be
 * treated in a different way.
 *
 * The above services or applications benefit mostly from a high
 * throughput: the quicker the requests of the activated queues are
 * cumulatively served, the sooner the target job of these queues gets
 * completed. As a consequence, weight-raising any of these queues,
 * which also implies idling the device for it, is almost always
 * counterproductive, unless there are other active queues to isolate
 * these new queues from. If there are no other active queues, then
 * weight-raising these new queues just lowers throughput in most
 * cases.
 *
 * On the other hand, a burst of queue creations may be caused also by
 * the start of an application that does not consist of a lot of
 * parallel I/O-bound threads. In fact, with a complex application,
 * several short processes may need to be executed to start up the
 * application. In this respect, to start an application as quickly as
 * possible, the best thing to do is in any case to privilege the I/O
 * related to the application with respect to all other
 * I/O. Therefore, the best strategy to start as quickly as possible
 * an application that causes a burst of queue creations is to
 * weight-raise all the queues created during the burst. This is the
 * exact opposite of the best strategy for the other type of bursts.
 *
 * In the end, to take the best action for each of the two cases, the
 * two types of bursts need to be distinguished. Fortunately, this
 * seems relatively easy, by looking at the sizes of the bursts. In
 * particular, we found a threshold such that only bursts with a
 * larger size than that threshold are apparently caused by
 * services or commands such as systemd or git grep. For brevity,
 * hereafter we call these bursts just 'large'. BFQ *does not*
 * weight-raise queues whose creation occurs in a large burst. In
 * addition, for each of these queues BFQ performs or does not perform
 * idling depending on which choice boosts the throughput more. The
 * exact choice depends on the device and request pattern at
 * hand.
 *
 * Unfortunately, false positives may occur while an interactive task
 * is starting (e.g., an application is being started). The
 * consequence is that the queues associated with the task do not
 * enjoy weight raising as expected. Fortunately these false positives
 * are very rare. They typically occur if some service happens to
 * start doing I/O exactly when the interactive task starts.
 *
 * Turning back to the next function, it is invoked only if there are
 * no active queues (apart from active queues that would belong to the
 * same, possible burst bfqq would belong to), and it implements all
 * the steps needed to detect the occurrence of a large burst and to
 * properly mark all the queues belonging to it (so that they can then
 * be treated in a different way). This goal is achieved by
 * maintaining a "burst list" that holds, temporarily, the queues that
 * belong to the burst in progress. The list is then used to mark
 * these queues as belonging to a large burst if the burst does become
 * large. The main steps are the following.
 *
 * . when the very first queue is created, the queue is inserted into the
 *   list (as it could be the first queue in a possible burst)
 *
 * . if the current burst has not yet become large, and a queue Q that does
 *   not yet belong to the burst is activated shortly after the last time
 *   at which a new queue entered the burst list, then the function appends
 *   Q to the burst list
 *
 * . if, as a consequence of the previous step, the burst size reaches
 *   the large-burst threshold, then
 *
 *   . all the queues in the burst list are marked as belonging to a
 *     large burst
 *
 *   . the burst list is deleted; in fact, the burst list already served
 *     its purpose (temporarily keeping track of the queues in a burst,
 *     so as to be able to mark them as belonging to a large burst in the
 *     previous sub-step), and now is not needed any more
 *
 *   . the device enters a large-burst mode
 *
 * . if a queue Q that does not belong to the burst is created while
 *   the device is in large-burst mode and shortly after the last time
 *   at which a queue either entered the burst list or was marked as
 *   belonging to the current large burst, then Q is immediately marked
 *   as belonging to a large burst.
 *
 * . if a queue Q that does not belong to the burst is created a while
 *   later, i.e., not shortly after the last time at which a queue
 *   either entered the burst list or was marked as belonging to the
 *   current large burst, then the current burst is deemed as finished and:
 *
 *   . the large-burst mode is reset if set
 *
 *   . the burst list is emptied
 *
 *   . Q is inserted in the burst list, as Q may be the first queue
 *     in a possible new burst (then the burst list contains just Q
 *     after this step).
 */
static void bfq_handle_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
{
	/*
	 * If bfqq is already in the burst list or is part of a large
	 * burst, or finally has just been split, then there is
	 * nothing else to do.
	 */
	if (!hlist_unhashed(&bfqq->burst_list_node) ||
	    bfq_bfqq_in_large_burst(bfqq) ||
	    time_is_after_eq_jiffies(bfqq->split_time +
				     msecs_to_jiffies(10)))
		return;

	/*
	 * If bfqq's creation happens late enough, or bfqq belongs to
	 * a different group than the burst group, then the current
	 * burst is finished, and related data structures must be
	 * reset.
	 *
	 * In this respect, consider the special case where bfqq is
	 * the very first queue created after BFQ is selected for this
	 * device. In this case, last_ins_in_burst and
	 * burst_parent_entity are not yet significant when we get
	 * here. But it is easy to verify that, whether or not the
	 * following condition is true, bfqq will end up being
	 * inserted into the burst list. In particular the list will
	 * happen to contain only bfqq. And this is exactly what has
	 * to happen, as bfqq may be the first queue of the first
	 * burst.
	 */
	if (time_is_before_jiffies(bfqd->last_ins_in_burst +
	    bfqd->bfq_burst_interval) ||
	    bfqq->entity.parent != bfqd->burst_parent_entity) {
		bfqd->large_burst = false;
		bfq_reset_burst_list(bfqd, bfqq);
		goto end;
	}

	/*
	 * If we get here, then bfqq is being activated shortly after the
	 * last queue. So, if the current burst is also large, we can mark
	 * bfqq as belonging to this large burst immediately.
	 */
	if (bfqd->large_burst) {
		bfq_mark_bfqq_in_large_burst(bfqq);
		goto end;
	}

	/*
	 * If we get here, then a large-burst state has not yet been
	 * reached, but bfqq is being activated shortly after the last
	 * queue. Then we add bfqq to the burst.
	 */
	bfq_add_to_burst(bfqd, bfqq);
end:
	/*
	 * At this point, bfqq either has been added to the current
	 * burst or has caused the current burst to terminate and a
	 * possible new burst to start. In particular, in the second
	 * case, bfqq has become the first queue in the possible new
	 * burst. In both cases last_ins_in_burst needs to be moved
	 * forward.
	 */
	bfqd->last_ins_in_burst = jiffies;
}
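
/*
 * Illustrative sequence (added for clarity, not in the original
 * sources): if systemd spawns many I/O-bound services at boot, the
 * first created queue (re)starts the burst list; each further queue
 * created within bfq_burst_interval of the previous one is appended
 * by bfq_add_to_burst(); once burst_size reaches
 * bfq_large_burst_thresh, all listed queues are marked
 * in_large_burst, and queues created shortly afterwards are marked
 * directly, without being weight-raised.
 */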

static int bfq_bfqq_budget_left(struct bfq_queue *bfqq)
{
	struct bfq_entity *entity = &bfqq->entity;

	return entity->budget - entity->service;
}

/*
 * If enough samples have been computed, return the current max budget
 * stored in bfqd, which is dynamically updated according to the
 * estimated disk peak rate; otherwise return the default max budget
 */
static int bfq_max_budget(struct bfq_data *bfqd)
{
	if (bfqd->budgets_assigned < bfq_stats_min_budgets)
		return bfq_default_max_budget;
	else
		return bfqd->bfq_max_budget;
}

/*
 * Return min budget, which is a fraction of the current or default
 * max budget (trying with 1/32)
 */
static int bfq_min_budget(struct bfq_data *bfqd)
{
	if (bfqd->budgets_assigned < bfq_stats_min_budgets)
		return bfq_default_max_budget / 32;
	else
		return bfqd->bfq_max_budget / 32;
}

/*
 * The next function, invoked after the input queue bfqq switches from
 * idle to busy, updates the budget of bfqq. The function also tells
 * whether the in-service queue should be expired, by returning
 * true. The purpose of expiring the in-service queue is to give bfqq
 * the chance to possibly preempt the in-service queue, and the reason
 * for preempting the in-service queue is to achieve one of the two
 * goals below.
 *
 * 1. Guarantee to bfqq its reserved bandwidth even if bfqq has
 * expired because it has remained idle. In particular, bfqq may have
 * expired for one of the following two reasons:
 *
 * - BFQQE_NO_MORE_REQUESTS bfqq did not enjoy any device idling
 *   and did not make it to issue a new request before its last
 *   request was served;
 *
 * - BFQQE_TOO_IDLE bfqq did enjoy device idling, but did not issue
 *   a new request before the expiration of the idling-time.
 *
 * Even if bfqq has expired for one of the above reasons, the process
 * associated with the queue may be however issuing requests greedily,
 * and thus be sensitive to the bandwidth it receives (bfqq may have
 * remained idle for other reasons: CPU high load, bfqq not enjoying
 * idling, I/O throttling somewhere in the path from the process to
 * the I/O scheduler, ...). But if, after every expiration for one of
 * the above two reasons, bfqq has to wait for the service of at least
 * one full budget of another queue before being served again, then
 * bfqq is likely to get a much lower bandwidth or resource time than
 * its reserved ones. To address this issue, two countermeasures need
 * to be taken.
 *
 * First, the budget and the timestamps of bfqq need to be updated in
 * a special way on bfqq reactivation: they need to be updated as if
 * bfqq did not remain idle and did not expire. In fact, if they are
 * computed as if bfqq expired and remained idle until reactivation,
 * then the process associated with bfqq is treated as if, instead of
 * being greedy, it stopped issuing requests when bfqq remained idle,
 * and restarts issuing requests only on this reactivation.
In other 1518 * words, the scheduler does not help the process recover the "service 1519 * hole" between bfqq expiration and reactivation. As a consequence, 1520 * the process receives a lower bandwidth than its reserved one. In 1521 * contrast, to recover this hole, the budget must be updated as if 1522 * bfqq was not expired at all before this reactivation, i.e., it must 1523 * be set to the value of the remaining budget when bfqq was 1524 * expired. Along the same line, timestamps need to be assigned the 1525 * value they had the last time bfqq was selected for service, i.e., 1526 * before last expiration. Thus timestamps need to be back-shifted 1527 * with respect to their normal computation (see [1] for more details 1528 * on this tricky aspect). 1529 * 1530 * Secondly, to allow the process to recover the hole, the in-service 1531 * queue must be expired too, to give bfqq the chance to preempt it 1532 * immediately. In fact, if bfqq has to wait for a full budget of the 1533 * in-service queue to be completed, then it may become impossible to 1534 * let the process recover the hole, even if the back-shifted 1535 * timestamps of bfqq are lower than those of the in-service queue. If 1536 * this happens for most or all of the holes, then the process may not 1537 * receive its reserved bandwidth. In this respect, it is worth noting 1538 * that, being the service of outstanding requests unpreemptible, a 1539 * little fraction of the holes may however be unrecoverable, thereby 1540 * causing a little loss of bandwidth. 1541 * 1542 * The last important point is detecting whether bfqq does need this 1543 * bandwidth recovery. In this respect, the next function deems the 1544 * process associated with bfqq greedy, and thus allows it to recover 1545 * the hole, if: 1) the process is waiting for the arrival of a new 1546 * request (which implies that bfqq expired for one of the above two 1547 * reasons), and 2) such a request has arrived soon. The first 1548 * condition is controlled through the flag non_blocking_wait_rq, 1549 * while the second through the flag arrived_in_time. If both 1550 * conditions hold, then the function computes the budget in the 1551 * above-described special way, and signals that the in-service queue 1552 * should be expired. Timestamp back-shifting is done later in 1553 * __bfq_activate_entity. 1554 * 1555 * 2. Reduce latency. Even if timestamps are not backshifted to let 1556 * the process associated with bfqq recover a service hole, bfqq may 1557 * however happen to have, after being (re)activated, a lower finish 1558 * timestamp than the in-service queue. That is, the next budget of 1559 * bfqq may have to be completed before the one of the in-service 1560 * queue. If this is the case, then preempting the in-service queue 1561 * allows this goal to be achieved, apart from the unpreemptible, 1562 * outstanding requests mentioned above. 1563 * 1564 * Unfortunately, regardless of which of the above two goals one wants 1565 * to achieve, service trees need first to be updated to know whether 1566 * the in-service queue must be preempted. To have service trees 1567 * correctly updated, the in-service queue must be expired and 1568 * rescheduled, and bfqq must be scheduled too. This is one of the 1569 * most costly operations (in future versions, the scheduling 1570 * mechanism may be re-designed in such a way to make it possible to 1571 * know whether preemption is needed without needing to update service 1572 * trees). 
In addition, queue preemptions almost always cause random 1573 * I/O, which may in turn cause loss of throughput. Finally, there may 1574 * even be no in-service queue when the next function is invoked (so, 1575 * no queue to compare timestamps with). Because of these facts, the 1576 * next function adopts the following simple scheme to avoid costly 1577 * operations, too frequent preemptions and too many dependencies on 1578 * the state of the scheduler: it requests the expiration of the 1579 * in-service queue (unconditionally) only for queues that need to 1580 * recover a hole. Then it delegates to other parts of the code the 1581 * responsibility of handling the above case 2. 1582 */ 1583 static bool bfq_bfqq_update_budg_for_activation(struct bfq_data *bfqd, 1584 struct bfq_queue *bfqq, 1585 bool arrived_in_time) 1586 { 1587 struct bfq_entity *entity = &bfqq->entity; 1588 1589 /* 1590 * In the next compound condition, we check also whether there 1591 * is some budget left, because otherwise there is no point in 1592 * trying to go on serving bfqq with this same budget: bfqq 1593 * would be expired immediately after being selected for 1594 * service. This would only cause useless overhead. 1595 */ 1596 if (bfq_bfqq_non_blocking_wait_rq(bfqq) && arrived_in_time && 1597 bfq_bfqq_budget_left(bfqq) > 0) { 1598 /* 1599 * We do not clear the flag non_blocking_wait_rq here, as 1600 * the latter is used in bfq_activate_bfqq to signal 1601 * that timestamps need to be back-shifted (and is 1602 * cleared right after). 1603 */ 1604 1605 /* 1606 * In next assignment we rely on that either 1607 * entity->service or entity->budget are not updated 1608 * on expiration if bfqq is empty (see 1609 * __bfq_bfqq_recalc_budget). Thus both quantities 1610 * remain unchanged after such an expiration, and the 1611 * following statement therefore assigns to 1612 * entity->budget the remaining budget on such an 1613 * expiration. 1614 */ 1615 entity->budget = min_t(unsigned long, 1616 bfq_bfqq_budget_left(bfqq), 1617 bfqq->max_budget); 1618 1619 /* 1620 * At this point, we have used entity->service to get 1621 * the budget left (needed for updating 1622 * entity->budget). Thus we finally can, and have to, 1623 * reset entity->service. The latter must be reset 1624 * because bfqq would otherwise be charged again for 1625 * the service it has received during its previous 1626 * service slot(s). 1627 */ 1628 entity->service = 0; 1629 1630 return true; 1631 } 1632 1633 /* 1634 * We can finally complete expiration, by setting service to 0. 1635 */ 1636 entity->service = 0; 1637 entity->budget = max_t(unsigned long, bfqq->max_budget, 1638 bfq_serv_to_charge(bfqq->next_rq, bfqq)); 1639 bfq_clear_bfqq_non_blocking_wait_rq(bfqq); 1640 return false; 1641 } 1642 1643 /* 1644 * Return the farthest past time instant according to jiffies 1645 * macros. 
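 *
 * As an informal illustration (relying on the wrap-safe jiffies
 * comparison macros): for t = jiffies - MAX_JIFFY_OFFSET,
 * time_is_before_jiffies(t) evaluates to true, so a timestamp set to
 * this value is treated as long expired. This is the property
 * exploited when the value is used as a sort of minus infinity,
 * e.g., for wr_start_at_switch_to_srt below.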
1646 */ 1647 static unsigned long bfq_smallest_from_now(void) 1648 { 1649 return jiffies - MAX_JIFFY_OFFSET; 1650 } 1651 1652 static void bfq_update_bfqq_wr_on_rq_arrival(struct bfq_data *bfqd, 1653 struct bfq_queue *bfqq, 1654 unsigned int old_wr_coeff, 1655 bool wr_or_deserves_wr, 1656 bool interactive, 1657 bool in_burst, 1658 bool soft_rt) 1659 { 1660 if (old_wr_coeff == 1 && wr_or_deserves_wr) { 1661 /* start a weight-raising period */ 1662 if (interactive) { 1663 bfqq->service_from_wr = 0; 1664 bfqq->wr_coeff = bfqd->bfq_wr_coeff; 1665 bfqq->wr_cur_max_time = bfq_wr_duration(bfqd); 1666 } else { 1667 /* 1668 * No interactive weight raising in progress 1669 * here: assign minus infinity to 1670 * wr_start_at_switch_to_srt, to make sure 1671 * that, at the end of the soft-real-time 1672 * weight raising periods that is starting 1673 * now, no interactive weight-raising period 1674 * may be wrongly considered as still in 1675 * progress (and thus actually started by 1676 * mistake). 1677 */ 1678 bfqq->wr_start_at_switch_to_srt = 1679 bfq_smallest_from_now(); 1680 bfqq->wr_coeff = bfqd->bfq_wr_coeff * 1681 BFQ_SOFTRT_WEIGHT_FACTOR; 1682 bfqq->wr_cur_max_time = 1683 bfqd->bfq_wr_rt_max_time; 1684 } 1685 1686 /* 1687 * If needed, further reduce budget to make sure it is 1688 * close to bfqq's backlog, so as to reduce the 1689 * scheduling-error component due to a too large 1690 * budget. Do not care about throughput consequences, 1691 * but only about latency. Finally, do not assign a 1692 * too small budget either, to avoid increasing 1693 * latency by causing too frequent expirations. 1694 */ 1695 bfqq->entity.budget = min_t(unsigned long, 1696 bfqq->entity.budget, 1697 2 * bfq_min_budget(bfqd)); 1698 } else if (old_wr_coeff > 1) { 1699 if (interactive) { /* update wr coeff and duration */ 1700 bfqq->wr_coeff = bfqd->bfq_wr_coeff; 1701 bfqq->wr_cur_max_time = bfq_wr_duration(bfqd); 1702 } else if (in_burst) 1703 bfqq->wr_coeff = 1; 1704 else if (soft_rt) { 1705 /* 1706 * The application is now or still meeting the 1707 * requirements for being deemed soft rt. We 1708 * can then correctly and safely (re)charge 1709 * the weight-raising duration for the 1710 * application with the weight-raising 1711 * duration for soft rt applications. 1712 * 1713 * In particular, doing this recharge now, i.e., 1714 * before the weight-raising period for the 1715 * application finishes, reduces the probability 1716 * of the following negative scenario: 1717 * 1) the weight of a soft rt application is 1718 * raised at startup (as for any newly 1719 * created application), 1720 * 2) since the application is not interactive, 1721 * at a certain time weight-raising is 1722 * stopped for the application, 1723 * 3) at that time the application happens to 1724 * still have pending requests, and hence 1725 * is destined to not have a chance to be 1726 * deemed soft rt before these requests are 1727 * completed (see the comments to the 1728 * function bfq_bfqq_softrt_next_start() 1729 * for details on soft rt detection), 1730 * 4) these pending requests experience a high 1731 * latency because the application is not 1732 * weight-raised while they are pending. 
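 *
 * As a purely illustrative example of the recharge performed by the
 * code below: if a queue that is still in an interactive
 * weight-raising period also starts meeting the soft-rt requirements
 * while it has pending requests, it is switched to the soft-rt
 * coefficient and gets a full bfq_wr_rt_max_time recharged starting
 * from now, instead of having the original raising period run out
 * with those requests still queued.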
1733 */ 1734 if (bfqq->wr_cur_max_time != 1735 bfqd->bfq_wr_rt_max_time) { 1736 bfqq->wr_start_at_switch_to_srt = 1737 bfqq->last_wr_start_finish; 1738 1739 bfqq->wr_cur_max_time = 1740 bfqd->bfq_wr_rt_max_time; 1741 bfqq->wr_coeff = bfqd->bfq_wr_coeff * 1742 BFQ_SOFTRT_WEIGHT_FACTOR; 1743 } 1744 bfqq->last_wr_start_finish = jiffies; 1745 } 1746 } 1747 } 1748 1749 static bool bfq_bfqq_idle_for_long_time(struct bfq_data *bfqd, 1750 struct bfq_queue *bfqq) 1751 { 1752 return bfqq->dispatched == 0 && 1753 time_is_before_jiffies( 1754 bfqq->budget_timeout + 1755 bfqd->bfq_wr_min_idle_time); 1756 } 1757 1758 1759 /* 1760 * Return true if bfqq is in a higher priority class, or has a higher 1761 * weight than the in-service queue. 1762 */ 1763 static bool bfq_bfqq_higher_class_or_weight(struct bfq_queue *bfqq, 1764 struct bfq_queue *in_serv_bfqq) 1765 { 1766 int bfqq_weight, in_serv_weight; 1767 1768 if (bfqq->ioprio_class < in_serv_bfqq->ioprio_class) 1769 return true; 1770 1771 if (in_serv_bfqq->entity.parent == bfqq->entity.parent) { 1772 bfqq_weight = bfqq->entity.weight; 1773 in_serv_weight = in_serv_bfqq->entity.weight; 1774 } else { 1775 if (bfqq->entity.parent) 1776 bfqq_weight = bfqq->entity.parent->weight; 1777 else 1778 bfqq_weight = bfqq->entity.weight; 1779 if (in_serv_bfqq->entity.parent) 1780 in_serv_weight = in_serv_bfqq->entity.parent->weight; 1781 else 1782 in_serv_weight = in_serv_bfqq->entity.weight; 1783 } 1784 1785 return bfqq_weight > in_serv_weight; 1786 } 1787 1788 /* 1789 * Get the index of the actuator that will serve bio. 1790 */ 1791 static unsigned int bfq_actuator_index(struct bfq_data *bfqd, struct bio *bio) 1792 { 1793 unsigned int i; 1794 sector_t end; 1795 1796 /* no search needed if one or zero ranges present */ 1797 if (bfqd->num_actuators == 1) 1798 return 0; 1799 1800 /* bio_end_sector(bio) gives the sector after the last one */ 1801 end = bio_end_sector(bio) - 1; 1802 1803 for (i = 0; i < bfqd->num_actuators; i++) { 1804 if (end >= bfqd->sector[i] && 1805 end < bfqd->sector[i] + bfqd->nr_sectors[i]) 1806 return i; 1807 } 1808 1809 WARN_ONCE(true, 1810 "bfq_actuator_index: bio sector out of ranges: end=%llu\n", 1811 end); 1812 return 0; 1813 } 1814 1815 static bool bfq_better_to_idle(struct bfq_queue *bfqq); 1816 1817 static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd, 1818 struct bfq_queue *bfqq, 1819 int old_wr_coeff, 1820 struct request *rq, 1821 bool *interactive) 1822 { 1823 bool soft_rt, in_burst, wr_or_deserves_wr, 1824 bfqq_wants_to_preempt, 1825 idle_for_long_time = bfq_bfqq_idle_for_long_time(bfqd, bfqq), 1826 /* 1827 * See the comments on 1828 * bfq_bfqq_update_budg_for_activation for 1829 * details on the usage of the next variable. 
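 *
 * In short, the assignment below deems a request to have arrived in
 * time if it arrives within three times bfq_slice_idle from the
 * completion of bfqq's last request.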
1830 */ 1831 arrived_in_time = ktime_get_ns() <= 1832 bfqq->ttime.last_end_request + 1833 bfqd->bfq_slice_idle * 3; 1834 unsigned int act_idx = bfq_actuator_index(bfqd, rq->bio); 1835 bool bfqq_non_merged_or_stably_merged = 1836 bfqq->bic || RQ_BIC(rq)->bfqq_data[act_idx].stably_merged; 1837 1838 /* 1839 * bfqq deserves to be weight-raised if: 1840 * - it is sync, 1841 * - it does not belong to a large burst, 1842 * - it has been idle for enough time or is soft real-time, 1843 * - is linked to a bfq_io_cq (it is not shared in any sense), 1844 * - has a default weight (otherwise we assume the user wanted 1845 * to control its weight explicitly) 1846 */ 1847 in_burst = bfq_bfqq_in_large_burst(bfqq); 1848 soft_rt = bfqd->bfq_wr_max_softrt_rate > 0 && 1849 !BFQQ_TOTALLY_SEEKY(bfqq) && 1850 !in_burst && 1851 time_is_before_jiffies(bfqq->soft_rt_next_start) && 1852 bfqq->dispatched == 0 && 1853 bfqq->entity.new_weight == 40; 1854 *interactive = !in_burst && idle_for_long_time && 1855 bfqq->entity.new_weight == 40; 1856 /* 1857 * Merged bfq_queues are kept out of weight-raising 1858 * (low-latency) mechanisms. The reason is that these queues 1859 * are usually created for non-interactive and 1860 * non-soft-real-time tasks. Yet this is not the case for 1861 * stably-merged queues. These queues are merged just because 1862 * they are created shortly after each other. So they may 1863 * easily serve the I/O of an interactive or soft real-time 1864 * application, if the application happens to spawn multiple 1865 * processes. So let stably-merged queues also enjoy weight 1866 * raising. 1867 */ 1868 wr_or_deserves_wr = bfqd->low_latency && 1869 (bfqq->wr_coeff > 1 || 1870 (bfq_bfqq_sync(bfqq) && bfqq_non_merged_or_stably_merged && 1871 (*interactive || soft_rt))); 1872 1873 /* 1874 * Using the last flag, update budget and check whether bfqq 1875 * may want to preempt the in-service queue. 1876 */ 1877 bfqq_wants_to_preempt = 1878 bfq_bfqq_update_budg_for_activation(bfqd, bfqq, 1879 arrived_in_time); 1880 1881 /* 1882 * If bfqq happened to be activated in a burst, but has been 1883 * idle for much more than an interactive queue, then we 1884 * assume that, in the overall I/O initiated in the burst, the 1885 * I/O associated with bfqq is finished. So bfqq does not need 1886 * to be treated as a queue belonging to a burst 1887 * anymore. Accordingly, we reset bfqq's in_large_burst flag 1888 * if set, and remove bfqq from the burst list if it's 1889 * there. We do not decrement burst_size, because the fact 1890 * that bfqq does not need to belong to the burst list any 1891 * more does not invalidate the fact that bfqq was created in 1892 * a burst.
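 *
 * In the code below, "idle for much more than an interactive queue"
 * is implemented by also requiring bfqq->budget_timeout to be older
 * than about ten seconds (msecs_to_jiffies(10000)).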
1893 */ 1894 if (likely(!bfq_bfqq_just_created(bfqq)) && 1895 idle_for_long_time && 1896 time_is_before_jiffies( 1897 bfqq->budget_timeout + 1898 msecs_to_jiffies(10000))) { 1899 hlist_del_init(&bfqq->burst_list_node); 1900 bfq_clear_bfqq_in_large_burst(bfqq); 1901 } 1902 1903 bfq_clear_bfqq_just_created(bfqq); 1904 1905 if (bfqd->low_latency) { 1906 if (unlikely(time_is_after_jiffies(bfqq->split_time))) 1907 /* wraparound */ 1908 bfqq->split_time = 1909 jiffies - bfqd->bfq_wr_min_idle_time - 1; 1910 1911 if (time_is_before_jiffies(bfqq->split_time + 1912 bfqd->bfq_wr_min_idle_time)) { 1913 bfq_update_bfqq_wr_on_rq_arrival(bfqd, bfqq, 1914 old_wr_coeff, 1915 wr_or_deserves_wr, 1916 *interactive, 1917 in_burst, 1918 soft_rt); 1919 1920 if (old_wr_coeff != bfqq->wr_coeff) 1921 bfqq->entity.prio_changed = 1; 1922 } 1923 } 1924 1925 bfqq->last_idle_bklogged = jiffies; 1926 bfqq->service_from_backlogged = 0; 1927 bfq_clear_bfqq_softrt_update(bfqq); 1928 1929 bfq_add_bfqq_busy(bfqq); 1930 1931 /* 1932 * Expire in-service queue if preemption may be needed for 1933 * guarantees or throughput. As for guarantees, we care 1934 * explicitly about two cases. The first is that bfqq has to 1935 * recover a service hole, as explained in the comments on 1936 * bfq_bfqq_update_budg_for_activation(), i.e., that 1937 * bfqq_wants_to_preempt is true. However, if bfqq does not 1938 * carry time-critical I/O, then bfqq's bandwidth is less 1939 * important than that of queues that carry time-critical I/O. 1940 * So, as a further constraint, we consider this case only if 1941 * bfqq is at least as weight-raised, i.e., at least as time 1942 * critical, as the in-service queue. 1943 * 1944 * The second case is that bfqq is in a higher priority class, 1945 * or has a higher weight than the in-service queue. If this 1946 * condition does not hold, we don't care because, even if 1947 * bfqq does not start to be served immediately, the resulting 1948 * delay for bfqq's I/O is however lower or much lower than 1949 * the ideal completion time to be guaranteed to bfqq's I/O. 1950 * 1951 * In both cases, preemption is needed only if, according to 1952 * the timestamps of both bfqq and of the in-service queue, 1953 * bfqq actually is the next queue to serve. So, to reduce 1954 * useless preemptions, the return value of 1955 * next_queue_may_preempt() is considered in the next compound 1956 * condition too. Yet next_queue_may_preempt() just checks a 1957 * simple, necessary condition for bfqq to be the next queue 1958 * to serve. In fact, to evaluate a sufficient condition, the 1959 * timestamps of the in-service queue would need to be 1960 * updated, and this operation is quite costly (see the 1961 * comments on bfq_bfqq_update_budg_for_activation()). 1962 * 1963 * As for throughput, we ask bfq_better_to_idle() whether we 1964 * still need to plug I/O dispatching. If bfq_better_to_idle() 1965 * says no, then plugging is not needed any longer, either to 1966 * boost throughput or to preserve service guarantees. Then 1967 * the best option is to stop plugging I/O, as not doing so 1968 * would certainly lower throughput. We may end up in this 1969 * case if: (1) upon a dispatch attempt, we detected that it 1970 * was better to plug I/O dispatch, and to wait for a new 1971 * request to arrive for the currently in-service queue, but 1972 * (2) this switch of bfqq to busy changes the scenario.
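 *
 * Schematically, the compound condition below expires the in-service
 * queue when there is an in-service queue and
 *
 *   ((bfqq_wants_to_preempt &&
 *     bfqq->wr_coeff >= in_service_queue->wr_coeff) ||
 *    bfq_bfqq_higher_class_or_weight(bfqq, in_service_queue) ||
 *    !bfq_better_to_idle(in_service_queue)) &&
 *   next_queue_may_preempt(bfqd)
 *
 * holds, i.e., it is just a restatement of the guarantee and
 * throughput cases discussed above.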
1973 */ 1974 if (bfqd->in_service_queue && 1975 ((bfqq_wants_to_preempt && 1976 bfqq->wr_coeff >= bfqd->in_service_queue->wr_coeff) || 1977 bfq_bfqq_higher_class_or_weight(bfqq, bfqd->in_service_queue) || 1978 !bfq_better_to_idle(bfqd->in_service_queue)) && 1979 next_queue_may_preempt(bfqd)) 1980 bfq_bfqq_expire(bfqd, bfqd->in_service_queue, 1981 false, BFQQE_PREEMPTED); 1982 } 1983 1984 static void bfq_reset_inject_limit(struct bfq_data *bfqd, 1985 struct bfq_queue *bfqq) 1986 { 1987 /* invalidate baseline total service time */ 1988 bfqq->last_serv_time_ns = 0; 1989 1990 /* 1991 * Reset pointer in case we are waiting for 1992 * some request completion. 1993 */ 1994 bfqd->waited_rq = NULL; 1995 1996 /* 1997 * If bfqq has a short think time, then start by setting the 1998 * inject limit to 0 prudentially, because the service time of 1999 * an injected I/O request may be higher than the think time 2000 * of bfqq, and therefore, if one request was injected when 2001 * bfqq remains empty, this injected request might delay the 2002 * service of the next I/O request for bfqq significantly. In 2003 * case bfqq can actually tolerate some injection, then the 2004 * adaptive update will however raise the limit soon. This 2005 * lucky circumstance holds exactly because bfqq has a short 2006 * think time, and thus, after remaining empty, is likely to 2007 * get new I/O enqueued---and then completed---before being 2008 * expired. This is the very pattern that gives the 2009 * limit-update algorithm the chance to measure the effect of 2010 * injection on request service times, and then to update the 2011 * limit accordingly. 2012 * 2013 * However, in the following special case, the inject limit is 2014 * left to 1 even if the think time is short: bfqq's I/O is 2015 * synchronized with that of some other queue, i.e., bfqq may 2016 * receive new I/O only after the I/O of the other queue is 2017 * completed. Keeping the inject limit to 1 allows the 2018 * blocking I/O to be served while bfqq is in service. And 2019 * this is very convenient both for bfqq and for overall 2020 * throughput, as explained in detail in the comments in 2021 * bfq_update_has_short_ttime(). 2022 * 2023 * On the opposite end, if bfqq has a long think time, then 2024 * start directly by 1, because: 2025 * a) on the bright side, keeping at most one request in 2026 * service in the drive is unlikely to cause any harm to the 2027 * latency of bfqq's requests, as the service time of a single 2028 * request is likely to be lower than the think time of bfqq; 2029 * b) on the downside, after becoming empty, bfqq is likely to 2030 * expire before getting its next request. With this request 2031 * arrival pattern, it is very hard to sample total service 2032 * times and update the inject limit accordingly (see comments 2033 * on bfq_update_inject_limit()). So the limit is likely to be 2034 * never, or at least seldom, updated. As a consequence, by 2035 * setting the limit to 1, we avoid that no injection ever 2036 * occurs with bfqq. On the downside, this proactive step 2037 * further reduces chances to actually compute the baseline 2038 * total service time. Thus it reduces chances to execute the 2039 * limit-update algorithm and possibly raise the limit to more 2040 * than 1. 
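 *
 * In short: with a short think time the code below starts from an
 * inject limit of 0 and relies on the adaptive algorithm to raise it,
 * while with a long think time it starts directly from 1, trading
 * some sampling accuracy for not ruling out injection altogether.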
2041 */ 2042 if (bfq_bfqq_has_short_ttime(bfqq)) 2043 bfqq->inject_limit = 0; 2044 else 2045 bfqq->inject_limit = 1; 2046 2047 bfqq->decrease_time_jif = jiffies; 2048 } 2049 2050 static void bfq_update_io_intensity(struct bfq_queue *bfqq, u64 now_ns) 2051 { 2052 u64 tot_io_time = now_ns - bfqq->io_start_time; 2053 2054 if (RB_EMPTY_ROOT(&bfqq->sort_list) && bfqq->dispatched == 0) 2055 bfqq->tot_idle_time += 2056 now_ns - bfqq->ttime.last_end_request; 2057 2058 if (unlikely(bfq_bfqq_just_created(bfqq))) 2059 return; 2060 2061 /* 2062 * Must be busy for at least about 80% of the time to be 2063 * considered I/O bound. 2064 */ 2065 if (bfqq->tot_idle_time * 5 > tot_io_time) 2066 bfq_clear_bfqq_IO_bound(bfqq); 2067 else 2068 bfq_mark_bfqq_IO_bound(bfqq); 2069 2070 /* 2071 * Keep an observation window of at most 200 ms in the past 2072 * from now. 2073 */ 2074 if (tot_io_time > 200 * NSEC_PER_MSEC) { 2075 bfqq->io_start_time = now_ns - (tot_io_time>>1); 2076 bfqq->tot_idle_time >>= 1; 2077 } 2078 } 2079 2080 /* 2081 * Detect whether bfqq's I/O seems synchronized with that of some 2082 * other queue, i.e., whether bfqq, after remaining empty, happens to 2083 * receive new I/O only right after some I/O request of the other 2084 * queue has been completed. We call waker queue the other queue, and 2085 * we assume, for simplicity, that bfqq may have at most one waker 2086 * queue. 2087 * 2088 * A remarkable throughput boost can be reached by unconditionally 2089 * injecting the I/O of the waker queue, every time a new 2090 * bfq_dispatch_request happens to be invoked while I/O is being 2091 * plugged for bfqq. In addition to boosting throughput, this 2092 * unblocks bfqq's I/O, thereby improving bandwidth and latency for 2093 * bfqq. Note that these same results may be achieved with the general 2094 * injection mechanism, but less effectively. For details on this 2095 * aspect, see the comments on the choice of the queue for injection 2096 * in bfq_select_queue(). 2097 * 2098 * Turning back to the detection of a waker queue, a queue Q is deemed as a 2099 * waker queue for bfqq if, for three consecutive times, bfqq happens to become 2100 * non empty right after a request of Q has been completed within given 2101 * timeout. In this respect, even if bfqq is empty, we do not check for a waker 2102 * if it still has some in-flight I/O. In fact, in this case bfqq is actually 2103 * still being served by the drive, and may receive new I/O on the completion 2104 * of some of the in-flight requests. In particular, on the first time, Q is 2105 * tentatively set as a candidate waker queue, while on the third consecutive 2106 * time that Q is detected, the field waker_bfqq is set to Q, to confirm that Q 2107 * is a waker queue for bfqq. These detection steps are performed only if bfqq 2108 * has a long think time, so as to make it more likely that bfqq's I/O is 2109 * actually being blocked by a synchronization. This last filter, plus the 2110 * above three-times requirement and time limit for detection, make false 2111 * positives less likely. 2112 * 2113 * NOTE 2114 * 2115 * The sooner a waker queue is detected, the sooner throughput can be 2116 * boosted by injecting I/O from the waker queue. Fortunately, 2117 * detection is likely to be actually fast, for the following 2118 * reasons. While blocked by synchronization, bfqq has a long think 2119 * time. This implies that bfqq's inject limit is at least equal to 1 2120 * (see the comments in bfq_update_inject_limit()). 
So, thanks to 2121 * injection, the waker queue is likely to be served during the very 2122 * first I/O-plugging time interval for bfqq. This triggers the first 2123 * step of the detection mechanism. Thanks again to injection, the 2124 * candidate waker queue is then likely to be confirmed no later than 2125 * during the next I/O-plugging interval for bfqq. 2126 * 2127 * ISSUE 2128 * 2129 * On queue merging all waker information is lost. 2130 */ 2131 static void bfq_check_waker(struct bfq_data *bfqd, struct bfq_queue *bfqq, 2132 u64 now_ns) 2133 { 2134 char waker_name[MAX_BFQQ_NAME_LENGTH]; 2135 2136 if (!bfqd->last_completed_rq_bfqq || 2137 bfqd->last_completed_rq_bfqq == bfqq || 2138 bfq_bfqq_has_short_ttime(bfqq) || 2139 now_ns - bfqd->last_completion >= 4 * NSEC_PER_MSEC || 2140 bfqd->last_completed_rq_bfqq == &bfqd->oom_bfqq || 2141 bfqq == &bfqd->oom_bfqq) 2142 return; 2143 2144 /* 2145 * We reset waker detection logic also if too much time has passed 2146 * since the first detection. If wakeups are rare, pointless idling 2147 * doesn't hurt throughput that much. The condition below makes sure 2148 * we do not uselessly idle blocking waker in more than 1/64 cases. 2149 */ 2150 if (bfqd->last_completed_rq_bfqq != 2151 bfqq->tentative_waker_bfqq || 2152 now_ns > bfqq->waker_detection_started + 2153 128 * (u64)bfqd->bfq_slice_idle) { 2154 /* 2155 * First synchronization detected with a 2156 * candidate waker queue, or with a different 2157 * candidate waker queue from the current one. 2158 */ 2159 bfqq->tentative_waker_bfqq = 2160 bfqd->last_completed_rq_bfqq; 2161 bfqq->num_waker_detections = 1; 2162 bfqq->waker_detection_started = now_ns; 2163 bfq_bfqq_name(bfqq->tentative_waker_bfqq, waker_name, 2164 MAX_BFQQ_NAME_LENGTH); 2165 bfq_log_bfqq(bfqd, bfqq, "set tentative waker %s", waker_name); 2166 } else /* Same tentative waker queue detected again */ 2167 bfqq->num_waker_detections++; 2168 2169 if (bfqq->num_waker_detections == 3) { 2170 bfqq->waker_bfqq = bfqd->last_completed_rq_bfqq; 2171 bfqq->tentative_waker_bfqq = NULL; 2172 bfq_bfqq_name(bfqq->waker_bfqq, waker_name, 2173 MAX_BFQQ_NAME_LENGTH); 2174 bfq_log_bfqq(bfqd, bfqq, "set waker %s", waker_name); 2175 2176 /* 2177 * If the waker queue disappears, then 2178 * bfqq->waker_bfqq must be reset. To 2179 * this goal, we maintain in each 2180 * waker queue a list, woken_list, of 2181 * all the queues that reference the 2182 * waker queue through their 2183 * waker_bfqq pointer. When the waker 2184 * queue exits, the waker_bfqq pointer 2185 * of all the queues in the woken_list 2186 * is reset. 2187 * 2188 * In addition, if bfqq is already in 2189 * the woken_list of a waker queue, 2190 * then, before being inserted into 2191 * the woken_list of a new waker 2192 * queue, bfqq must be removed from 2193 * the woken_list of the old waker 2194 * queue. 
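 *
 * Concretely, the code below first removes bfqq from the woken_list
 * of any previous waker (hlist_del_init) and only then adds it to the
 * woken_list of the new waker, so that bfqq is linked to at most one
 * woken_list at a time.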
2195 */ 2196 if (!hlist_unhashed(&bfqq->woken_list_node)) 2197 hlist_del_init(&bfqq->woken_list_node); 2198 hlist_add_head(&bfqq->woken_list_node, 2199 &bfqd->last_completed_rq_bfqq->woken_list); 2200 } 2201 } 2202 2203 static void bfq_add_request(struct request *rq) 2204 { 2205 struct bfq_queue *bfqq = RQ_BFQQ(rq); 2206 struct bfq_data *bfqd = bfqq->bfqd; 2207 struct request *next_rq, *prev; 2208 unsigned int old_wr_coeff = bfqq->wr_coeff; 2209 bool interactive = false; 2210 u64 now_ns = ktime_get_ns(); 2211 2212 bfq_log_bfqq(bfqd, bfqq, "add_request %d", rq_is_sync(rq)); 2213 bfqq->queued[rq_is_sync(rq)]++; 2214 /* 2215 * Updating of 'bfqd->queued' is protected by 'bfqd->lock', however, it 2216 * may be read without holding the lock in bfq_has_work(). 2217 */ 2218 WRITE_ONCE(bfqd->queued, bfqd->queued + 1); 2219 2220 if (bfq_bfqq_sync(bfqq) && RQ_BIC(rq)->requests <= 1) { 2221 bfq_check_waker(bfqd, bfqq, now_ns); 2222 2223 /* 2224 * Periodically reset inject limit, to make sure that 2225 * the latter eventually drops in case workload 2226 * changes, see step (3) in the comments on 2227 * bfq_update_inject_limit(). 2228 */ 2229 if (time_is_before_eq_jiffies(bfqq->decrease_time_jif + 2230 msecs_to_jiffies(1000))) 2231 bfq_reset_inject_limit(bfqd, bfqq); 2232 2233 /* 2234 * The following conditions must hold to setup a new 2235 * sampling of total service time, and then a new 2236 * update of the inject limit: 2237 * - bfqq is in service, because the total service 2238 * time is evaluated only for the I/O requests of 2239 * the queues in service; 2240 * - this is the right occasion to compute or to 2241 * lower the baseline total service time, because 2242 * there are actually no requests in the drive, 2243 * or 2244 * the baseline total service time is available, and 2245 * this is the right occasion to compute the other 2246 * quantity needed to update the inject limit, i.e., 2247 * the total service time caused by the amount of 2248 * injection allowed by the current value of the 2249 * limit. It is the right occasion because injection 2250 * has actually been performed during the service 2251 * hole, and there are still in-flight requests, 2252 * which are very likely to be exactly the injected 2253 * requests, or part of them; 2254 * - the minimum interval for sampling the total 2255 * service time and updating the inject limit has 2256 * elapsed. 2257 */ 2258 if (bfqq == bfqd->in_service_queue && 2259 (bfqd->tot_rq_in_driver == 0 || 2260 (bfqq->last_serv_time_ns > 0 && 2261 bfqd->rqs_injected && bfqd->tot_rq_in_driver > 0)) && 2262 time_is_before_eq_jiffies(bfqq->decrease_time_jif + 2263 msecs_to_jiffies(10))) { 2264 bfqd->last_empty_occupied_ns = ktime_get_ns(); 2265 /* 2266 * Start the state machine for measuring the 2267 * total service time of rq: setting 2268 * wait_dispatch will cause bfqd->waited_rq to 2269 * be set when rq will be dispatched. 2270 */ 2271 bfqd->wait_dispatch = true; 2272 /* 2273 * If there is no I/O in service in the drive, 2274 * then possible injection occurred before the 2275 * arrival of rq will not affect the total 2276 * service time of rq. So the injection limit 2277 * must not be updated as a function of such 2278 * total service time, unless new injection 2279 * occurs before rq is completed. To have the 2280 * injection limit updated only in the latter 2281 * case, reset rqs_injected here (rqs_injected 2282 * will be set in case injection is performed 2283 * on bfqq before rq is completed). 
2284 */ 2285 if (bfqd->tot_rq_in_driver == 0) 2286 bfqd->rqs_injected = false; 2287 } 2288 } 2289 2290 if (bfq_bfqq_sync(bfqq)) 2291 bfq_update_io_intensity(bfqq, now_ns); 2292 2293 elv_rb_add(&bfqq->sort_list, rq); 2294 2295 /* 2296 * Check if this request is a better next-serve candidate. 2297 */ 2298 prev = bfqq->next_rq; 2299 next_rq = bfq_choose_req(bfqd, bfqq->next_rq, rq, bfqd->last_position); 2300 bfqq->next_rq = next_rq; 2301 2302 /* 2303 * Adjust priority tree position, if next_rq changes. 2304 * See comments on bfq_pos_tree_add_move() for the unlikely(). 2305 */ 2306 if (unlikely(!bfqd->nonrot_with_queueing && prev != bfqq->next_rq)) 2307 bfq_pos_tree_add_move(bfqd, bfqq); 2308 2309 if (!bfq_bfqq_busy(bfqq)) /* switching to busy ... */ 2310 bfq_bfqq_handle_idle_busy_switch(bfqd, bfqq, old_wr_coeff, 2311 rq, &interactive); 2312 else { 2313 if (bfqd->low_latency && old_wr_coeff == 1 && !rq_is_sync(rq) && 2314 time_is_before_jiffies( 2315 bfqq->last_wr_start_finish + 2316 bfqd->bfq_wr_min_inter_arr_async)) { 2317 bfqq->wr_coeff = bfqd->bfq_wr_coeff; 2318 bfqq->wr_cur_max_time = bfq_wr_duration(bfqd); 2319 2320 bfqd->wr_busy_queues++; 2321 bfqq->entity.prio_changed = 1; 2322 } 2323 if (prev != bfqq->next_rq) 2324 bfq_updated_next_req(bfqd, bfqq); 2325 } 2326 2327 /* 2328 * Assign jiffies to last_wr_start_finish in the following 2329 * cases: 2330 * 2331 * . if bfqq is not going to be weight-raised, because, for 2332 * non weight-raised queues, last_wr_start_finish stores the 2333 * arrival time of the last request; as of now, this piece 2334 * of information is used only for deciding whether to 2335 * weight-raise async queues 2336 * 2337 * . if bfqq is not weight-raised, because, if bfqq is now 2338 * switching to weight-raised, then last_wr_start_finish 2339 * stores the time when weight-raising starts 2340 * 2341 * . if bfqq is interactive, because, regardless of whether 2342 * bfqq is currently weight-raised, the weight-raising 2343 * period must start or restart (this case is considered 2344 * separately because it is not detected by the above 2345 * conditions, if bfqq is already weight-raised) 2346 * 2347 * last_wr_start_finish has to be updated also if bfqq is soft 2348 * real-time, because the weight-raising period is constantly 2349 * restarted on idle-to-busy transitions for these queues, but 2350 * this is already done in bfq_bfqq_handle_idle_busy_switch if 2351 * needed. 
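 *
 * The condition below (old_wr_coeff == 1 || bfqq->wr_coeff == 1 ||
 * interactive) covers exactly the three cases listed above: bfqq was
 * not weight-raised before this arrival, bfqq is not weight-raised
 * now, or bfqq is interactive.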
2352 */ 2353 if (bfqd->low_latency && 2354 (old_wr_coeff == 1 || bfqq->wr_coeff == 1 || interactive)) 2355 bfqq->last_wr_start_finish = jiffies; 2356 } 2357 2358 static struct request *bfq_find_rq_fmerge(struct bfq_data *bfqd, 2359 struct bio *bio, 2360 struct request_queue *q) 2361 { 2362 struct bfq_queue *bfqq = bfqd->bio_bfqq; 2363 2364 2365 if (bfqq) 2366 return elv_rb_find(&bfqq->sort_list, bio_end_sector(bio)); 2367 2368 return NULL; 2369 } 2370 2371 static sector_t get_sdist(sector_t last_pos, struct request *rq) 2372 { 2373 if (last_pos) 2374 return abs(blk_rq_pos(rq) - last_pos); 2375 2376 return 0; 2377 } 2378 2379 static void bfq_remove_request(struct request_queue *q, 2380 struct request *rq) 2381 { 2382 struct bfq_queue *bfqq = RQ_BFQQ(rq); 2383 struct bfq_data *bfqd = bfqq->bfqd; 2384 const int sync = rq_is_sync(rq); 2385 2386 if (bfqq->next_rq == rq) { 2387 bfqq->next_rq = bfq_find_next_rq(bfqd, bfqq, rq); 2388 bfq_updated_next_req(bfqd, bfqq); 2389 } 2390 2391 if (rq->queuelist.prev != &rq->queuelist) 2392 list_del_init(&rq->queuelist); 2393 bfqq->queued[sync]--; 2394 /* 2395 * Updating of 'bfqd->queued' is protected by 'bfqd->lock', however, it 2396 * may be read without holding the lock in bfq_has_work(). 2397 */ 2398 WRITE_ONCE(bfqd->queued, bfqd->queued - 1); 2399 elv_rb_del(&bfqq->sort_list, rq); 2400 2401 elv_rqhash_del(q, rq); 2402 if (q->last_merge == rq) 2403 q->last_merge = NULL; 2404 2405 if (RB_EMPTY_ROOT(&bfqq->sort_list)) { 2406 bfqq->next_rq = NULL; 2407 2408 if (bfq_bfqq_busy(bfqq) && bfqq != bfqd->in_service_queue) { 2409 bfq_del_bfqq_busy(bfqq, false); 2410 /* 2411 * bfqq emptied. In normal operation, when 2412 * bfqq is empty, bfqq->entity.service and 2413 * bfqq->entity.budget must contain, 2414 * respectively, the service received and the 2415 * budget used last time bfqq emptied. These 2416 * facts do not hold in this case, as at least 2417 * this last removal occurred while bfqq is 2418 * not in service. To avoid inconsistencies, 2419 * reset both bfqq->entity.service and 2420 * bfqq->entity.budget, if bfqq has still a 2421 * process that may issue I/O requests to it. 2422 */ 2423 bfqq->entity.budget = bfqq->entity.service = 0; 2424 } 2425 2426 /* 2427 * Remove queue from request-position tree as it is empty. 2428 */ 2429 if (bfqq->pos_root) { 2430 rb_erase(&bfqq->pos_node, bfqq->pos_root); 2431 bfqq->pos_root = NULL; 2432 } 2433 } else { 2434 /* see comments on bfq_pos_tree_add_move() for the unlikely() */ 2435 if (unlikely(!bfqd->nonrot_with_queueing)) 2436 bfq_pos_tree_add_move(bfqd, bfqq); 2437 } 2438 2439 if (rq->cmd_flags & REQ_META) 2440 bfqq->meta_pending--; 2441 2442 } 2443 2444 static bool bfq_bio_merge(struct request_queue *q, struct bio *bio, 2445 unsigned int nr_segs) 2446 { 2447 struct bfq_data *bfqd = q->elevator->elevator_data; 2448 struct request *free = NULL; 2449 /* 2450 * bfq_bic_lookup grabs the queue_lock: invoke it now and 2451 * store its return value for later use, to avoid nesting 2452 * queue_lock inside the bfqd->lock. We assume that the bic 2453 * returned by bfq_bic_lookup does not go away before 2454 * bfqd->lock is taken. 2455 */ 2456 struct bfq_io_cq *bic = bfq_bic_lookup(q); 2457 bool ret; 2458 2459 spin_lock_irq(&bfqd->lock); 2460 2461 if (bic) { 2462 /* 2463 * Make sure cgroup info is uptodate for current process before 2464 * considering the merge. 
2465 */ 2466 bfq_bic_update_cgroup(bic, bio); 2467 2468 bfqd->bio_bfqq = bic_to_bfqq(bic, op_is_sync(bio->bi_opf), 2469 bfq_actuator_index(bfqd, bio)); 2470 } else { 2471 bfqd->bio_bfqq = NULL; 2472 } 2473 bfqd->bio_bic = bic; 2474 2475 ret = blk_mq_sched_try_merge(q, bio, nr_segs, &free); 2476 2477 spin_unlock_irq(&bfqd->lock); 2478 if (free) 2479 blk_mq_free_request(free); 2480 2481 return ret; 2482 } 2483 2484 static int bfq_request_merge(struct request_queue *q, struct request **req, 2485 struct bio *bio) 2486 { 2487 struct bfq_data *bfqd = q->elevator->elevator_data; 2488 struct request *__rq; 2489 2490 __rq = bfq_find_rq_fmerge(bfqd, bio, q); 2491 if (__rq && elv_bio_merge_ok(__rq, bio)) { 2492 *req = __rq; 2493 2494 if (blk_discard_mergable(__rq)) 2495 return ELEVATOR_DISCARD_MERGE; 2496 return ELEVATOR_FRONT_MERGE; 2497 } 2498 2499 return ELEVATOR_NO_MERGE; 2500 } 2501 2502 static void bfq_request_merged(struct request_queue *q, struct request *req, 2503 enum elv_merge type) 2504 { 2505 if (type == ELEVATOR_FRONT_MERGE && 2506 rb_prev(&req->rb_node) && 2507 blk_rq_pos(req) < 2508 blk_rq_pos(container_of(rb_prev(&req->rb_node), 2509 struct request, rb_node))) { 2510 struct bfq_queue *bfqq = RQ_BFQQ(req); 2511 struct bfq_data *bfqd; 2512 struct request *prev, *next_rq; 2513 2514 if (!bfqq) 2515 return; 2516 2517 bfqd = bfqq->bfqd; 2518 2519 /* Reposition request in its sort_list */ 2520 elv_rb_del(&bfqq->sort_list, req); 2521 elv_rb_add(&bfqq->sort_list, req); 2522 2523 /* Choose next request to be served for bfqq */ 2524 prev = bfqq->next_rq; 2525 next_rq = bfq_choose_req(bfqd, bfqq->next_rq, req, 2526 bfqd->last_position); 2527 bfqq->next_rq = next_rq; 2528 /* 2529 * If next_rq changes, update both the queue's budget to 2530 * fit the new request and the queue's position in its 2531 * rq_pos_tree. 2532 */ 2533 if (prev != bfqq->next_rq) { 2534 bfq_updated_next_req(bfqd, bfqq); 2535 /* 2536 * See comments on bfq_pos_tree_add_move() for 2537 * the unlikely(). 2538 */ 2539 if (unlikely(!bfqd->nonrot_with_queueing)) 2540 bfq_pos_tree_add_move(bfqd, bfqq); 2541 } 2542 } 2543 } 2544 2545 /* 2546 * This function is called to notify the scheduler that the requests 2547 * rq and 'next' have been merged, with 'next' going away. BFQ 2548 * exploits this hook to address the following issue: if 'next' has a 2549 * fifo_time lower that rq, then the fifo_time of rq must be set to 2550 * the value of 'next', to not forget the greater age of 'next'. 2551 * 2552 * NOTE: in this function we assume that rq is in a bfq_queue, basing 2553 * on that rq is picked from the hash table q->elevator->hash, which, 2554 * in its turn, is filled only with I/O requests present in 2555 * bfq_queues, while BFQ is in use for the request queue q. In fact, 2556 * the function that fills this hash table (elv_rqhash_add) is called 2557 * only by bfq_insert_request. 2558 */ 2559 static void bfq_requests_merged(struct request_queue *q, struct request *rq, 2560 struct request *next) 2561 { 2562 struct bfq_queue *bfqq = RQ_BFQQ(rq), 2563 *next_bfqq = RQ_BFQQ(next); 2564 2565 if (!bfqq) 2566 goto remove; 2567 2568 /* 2569 * If next and rq belong to the same bfq_queue and next is older 2570 * than rq, then reposition rq in the fifo (by substituting next 2571 * with rq). 
Otherwise, if next and rq belong to different 2572 * bfq_queues, never reposition rq: in fact, we would have to 2573 * reposition it with respect to next's position in its own fifo, 2574 * which would most certainly be too expensive with respect to 2575 * the benefits. 2576 */ 2577 if (bfqq == next_bfqq && 2578 !list_empty(&rq->queuelist) && !list_empty(&next->queuelist) && 2579 next->fifo_time < rq->fifo_time) { 2580 list_del_init(&rq->queuelist); 2581 list_replace_init(&next->queuelist, &rq->queuelist); 2582 rq->fifo_time = next->fifo_time; 2583 } 2584 2585 if (bfqq->next_rq == next) 2586 bfqq->next_rq = rq; 2587 2588 bfqg_stats_update_io_merged(bfqq_group(bfqq), next->cmd_flags); 2589 remove: 2590 /* Merged request may be in the IO scheduler. Remove it. */ 2591 if (!RB_EMPTY_NODE(&next->rb_node)) { 2592 bfq_remove_request(next->q, next); 2593 if (next_bfqq) 2594 bfqg_stats_update_io_remove(bfqq_group(next_bfqq), 2595 next->cmd_flags); 2596 } 2597 } 2598 2599 /* Must be called with bfqq != NULL */ 2600 static void bfq_bfqq_end_wr(struct bfq_queue *bfqq) 2601 { 2602 /* 2603 * If bfqq has been enjoying interactive weight-raising, then 2604 * reset soft_rt_next_start. We do it for the following 2605 * reason. bfqq may have been conveying the I/O needed to load 2606 * a soft real-time application. Such an application actually 2607 * exhibits a soft real-time I/O pattern after it finishes 2608 * loading, and finally starts doing its job. But, if bfqq has 2609 * been receiving a lot of bandwidth so far (likely to happen 2610 * on a fast device), then soft_rt_next_start now contains a 2611 * high value. So, without this reset, bfqq would be 2612 * prevented from being possibly considered as soft_rt for a 2613 * very long time. 2614 */ 2615 2616 if (bfqq->wr_cur_max_time != 2617 bfqq->bfqd->bfq_wr_rt_max_time) 2618 bfqq->soft_rt_next_start = jiffies; 2619 2620 if (bfq_bfqq_busy(bfqq)) 2621 bfqq->bfqd->wr_busy_queues--; 2622 bfqq->wr_coeff = 1; 2623 bfqq->wr_cur_max_time = 0; 2624 bfqq->last_wr_start_finish = jiffies; 2625 /* 2626 * Trigger a weight change on the next invocation of 2627 * __bfq_entity_update_weight_prio.
2628 */ 2629 bfqq->entity.prio_changed = 1; 2630 } 2631 2632 void bfq_end_wr_async_queues(struct bfq_data *bfqd, 2633 struct bfq_group *bfqg) 2634 { 2635 int i, j, k; 2636 2637 for (k = 0; k < bfqd->num_actuators; k++) { 2638 for (i = 0; i < 2; i++) 2639 for (j = 0; j < IOPRIO_NR_LEVELS; j++) 2640 if (bfqg->async_bfqq[i][j][k]) 2641 bfq_bfqq_end_wr(bfqg->async_bfqq[i][j][k]); 2642 if (bfqg->async_idle_bfqq[k]) 2643 bfq_bfqq_end_wr(bfqg->async_idle_bfqq[k]); 2644 } 2645 } 2646 2647 static void bfq_end_wr(struct bfq_data *bfqd) 2648 { 2649 struct bfq_queue *bfqq; 2650 int i; 2651 2652 spin_lock_irq(&bfqd->lock); 2653 2654 for (i = 0; i < bfqd->num_actuators; i++) { 2655 list_for_each_entry(bfqq, &bfqd->active_list[i], bfqq_list) 2656 bfq_bfqq_end_wr(bfqq); 2657 } 2658 list_for_each_entry(bfqq, &bfqd->idle_list, bfqq_list) 2659 bfq_bfqq_end_wr(bfqq); 2660 bfq_end_wr_async(bfqd); 2661 2662 spin_unlock_irq(&bfqd->lock); 2663 } 2664 2665 static sector_t bfq_io_struct_pos(void *io_struct, bool request) 2666 { 2667 if (request) 2668 return blk_rq_pos(io_struct); 2669 else 2670 return ((struct bio *)io_struct)->bi_iter.bi_sector; 2671 } 2672 2673 static int bfq_rq_close_to_sector(void *io_struct, bool request, 2674 sector_t sector) 2675 { 2676 return abs(bfq_io_struct_pos(io_struct, request) - sector) <= 2677 BFQQ_CLOSE_THR; 2678 } 2679 2680 static struct bfq_queue *bfqq_find_close(struct bfq_data *bfqd, 2681 struct bfq_queue *bfqq, 2682 sector_t sector) 2683 { 2684 struct rb_root *root = &bfqq_group(bfqq)->rq_pos_tree; 2685 struct rb_node *parent, *node; 2686 struct bfq_queue *__bfqq; 2687 2688 if (RB_EMPTY_ROOT(root)) 2689 return NULL; 2690 2691 /* 2692 * First, if we find a request starting at the end of the last 2693 * request, choose it. 2694 */ 2695 __bfqq = bfq_rq_pos_tree_lookup(bfqd, root, sector, &parent, NULL); 2696 if (__bfqq) 2697 return __bfqq; 2698 2699 /* 2700 * If the exact sector wasn't found, the parent of the NULL leaf 2701 * will contain the closest sector (rq_pos_tree sorted by 2702 * next_request position). 2703 */ 2704 __bfqq = rb_entry(parent, struct bfq_queue, pos_node); 2705 if (bfq_rq_close_to_sector(__bfqq->next_rq, true, sector)) 2706 return __bfqq; 2707 2708 if (blk_rq_pos(__bfqq->next_rq) < sector) 2709 node = rb_next(&__bfqq->pos_node); 2710 else 2711 node = rb_prev(&__bfqq->pos_node); 2712 if (!node) 2713 return NULL; 2714 2715 __bfqq = rb_entry(node, struct bfq_queue, pos_node); 2716 if (bfq_rq_close_to_sector(__bfqq->next_rq, true, sector)) 2717 return __bfqq; 2718 2719 return NULL; 2720 } 2721 2722 static struct bfq_queue *bfq_find_close_cooperator(struct bfq_data *bfqd, 2723 struct bfq_queue *cur_bfqq, 2724 sector_t sector) 2725 { 2726 struct bfq_queue *bfqq; 2727 2728 /* 2729 * We shall notice if some of the queues are cooperating, 2730 * e.g., working closely on the same area of the device. In 2731 * that case, we can group them together and: 1) don't waste 2732 * time idling, and 2) serve the union of their requests in 2733 * the best possible order for throughput. 
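 *
 * A typical example of such cooperation is a set of processes doing
 * interleaved, mostly sequential I/O on the same region of the
 * device, as with the KVM/QEMU and dump I/O threads mentioned in the
 * comments on bfq_setup_cooperator().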
2734 */ 2735 bfqq = bfqq_find_close(bfqd, cur_bfqq, sector); 2736 if (!bfqq || bfqq == cur_bfqq) 2737 return NULL; 2738 2739 return bfqq; 2740 } 2741 2742 static struct bfq_queue * 2743 bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq) 2744 { 2745 int process_refs, new_process_refs; 2746 struct bfq_queue *__bfqq; 2747 2748 /* 2749 * If there are no process references on the new_bfqq, then it is 2750 * unsafe to follow the ->new_bfqq chain as other bfqq's in the chain 2751 * may have dropped their last reference (not just their last process 2752 * reference). 2753 */ 2754 if (!bfqq_process_refs(new_bfqq)) 2755 return NULL; 2756 2757 /* Avoid a circular list and skip interim queue merges. */ 2758 while ((__bfqq = new_bfqq->new_bfqq)) { 2759 if (__bfqq == bfqq) 2760 return NULL; 2761 new_bfqq = __bfqq; 2762 } 2763 2764 process_refs = bfqq_process_refs(bfqq); 2765 new_process_refs = bfqq_process_refs(new_bfqq); 2766 /* 2767 * If the process for the bfqq has gone away, there is no 2768 * sense in merging the queues. 2769 */ 2770 if (process_refs == 0 || new_process_refs == 0) 2771 return NULL; 2772 2773 /* 2774 * Make sure merged queues belong to the same parent. Parents could 2775 * have changed since the time we decided the two queues are suitable 2776 * for merging. 2777 */ 2778 if (new_bfqq->entity.parent != bfqq->entity.parent) 2779 return NULL; 2780 2781 bfq_log_bfqq(bfqq->bfqd, bfqq, "scheduling merge with queue %d", 2782 new_bfqq->pid); 2783 2784 /* 2785 * Merging is just a redirection: the requests of the process 2786 * owning one of the two queues are redirected to the other queue. 2787 * The latter queue, in its turn, is set as shared if this is the 2788 * first time that the requests of some process are redirected to 2789 * it. 2790 * 2791 * We redirect bfqq to new_bfqq and not the opposite, because 2792 * we are in the context of the process owning bfqq, thus we 2793 * have the io_cq of this process. So we can immediately 2794 * configure this io_cq to redirect the requests of the 2795 * process to new_bfqq. In contrast, the io_cq of new_bfqq is 2796 * not available any more (new_bfqq->bic == NULL). 2797 * 2798 * Anyway, even in case new_bfqq coincides with the in-service 2799 * queue, redirecting requests the in-service queue is the 2800 * best option, as we feed the in-service queue with new 2801 * requests close to the last request served and, by doing so, 2802 * are likely to increase the throughput. 2803 */ 2804 bfqq->new_bfqq = new_bfqq; 2805 /* 2806 * The above assignment schedules the following redirections: 2807 * each time some I/O for bfqq arrives, the process that 2808 * generated that I/O is disassociated from bfqq and 2809 * associated with new_bfqq. Here we increases new_bfqq->ref 2810 * in advance, adding the number of processes that are 2811 * expected to be associated with new_bfqq as they happen to 2812 * issue I/O. 2813 */ 2814 new_bfqq->ref += process_refs; 2815 return new_bfqq; 2816 } 2817 2818 static bool bfq_may_be_close_cooperator(struct bfq_queue *bfqq, 2819 struct bfq_queue *new_bfqq) 2820 { 2821 if (bfq_too_late_for_merging(new_bfqq)) 2822 return false; 2823 2824 if (bfq_class_idle(bfqq) || bfq_class_idle(new_bfqq) || 2825 (bfqq->ioprio_class != new_bfqq->ioprio_class)) 2826 return false; 2827 2828 /* 2829 * If either of the queues has already been detected as seeky, 2830 * then merging it with the other queue is unlikely to lead to 2831 * sequential I/O. 
2832 */ 2833 if (BFQQ_SEEKY(bfqq) || BFQQ_SEEKY(new_bfqq)) 2834 return false; 2835 2836 /* 2837 * Interleaved I/O is known to be done by (some) applications 2838 * only for reads, so it does not make sense to merge async 2839 * queues. 2840 */ 2841 if (!bfq_bfqq_sync(bfqq) || !bfq_bfqq_sync(new_bfqq)) 2842 return false; 2843 2844 return true; 2845 } 2846 2847 static bool idling_boosts_thr_without_issues(struct bfq_data *bfqd, 2848 struct bfq_queue *bfqq); 2849 2850 static struct bfq_queue * 2851 bfq_setup_stable_merge(struct bfq_data *bfqd, struct bfq_queue *bfqq, 2852 struct bfq_queue *stable_merge_bfqq, 2853 struct bfq_iocq_bfqq_data *bfqq_data) 2854 { 2855 int proc_ref = min(bfqq_process_refs(bfqq), 2856 bfqq_process_refs(stable_merge_bfqq)); 2857 struct bfq_queue *new_bfqq; 2858 2859 if (idling_boosts_thr_without_issues(bfqd, bfqq) || 2860 proc_ref == 0) 2861 return NULL; 2862 2863 /* next function will take at least one ref */ 2864 new_bfqq = bfq_setup_merge(bfqq, stable_merge_bfqq); 2865 2866 if (new_bfqq) { 2867 bfqq_data->stably_merged = true; 2868 if (new_bfqq->bic) { 2869 unsigned int new_a_idx = new_bfqq->actuator_idx; 2870 struct bfq_iocq_bfqq_data *new_bfqq_data = 2871 &new_bfqq->bic->bfqq_data[new_a_idx]; 2872 2873 new_bfqq_data->stably_merged = true; 2874 } 2875 } 2876 return new_bfqq; 2877 } 2878 2879 /* 2880 * Attempt to schedule a merge of bfqq with the currently in-service 2881 * queue or with a close queue among the scheduled queues. Return 2882 * NULL if no merge was scheduled, a pointer to the shared bfq_queue 2883 * structure otherwise. 2884 * 2885 * The OOM queue is not allowed to participate to cooperation: in fact, since 2886 * the requests temporarily redirected to the OOM queue could be redirected 2887 * again to dedicated queues at any time, the state needed to correctly 2888 * handle merging with the OOM queue would be quite complex and expensive 2889 * to maintain. Besides, in such a critical condition as an out of memory, 2890 * the benefits of queue merging may be little relevant, or even negligible. 2891 * 2892 * WARNING: queue merging may impair fairness among non-weight raised 2893 * queues, for at least two reasons: 1) the original weight of a 2894 * merged queue may change during the merged state, 2) even being the 2895 * weight the same, a merged queue may be bloated with many more 2896 * requests than the ones produced by its originally-associated 2897 * process. 2898 */ 2899 static struct bfq_queue * 2900 bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq, 2901 void *io_struct, bool request, struct bfq_io_cq *bic) 2902 { 2903 struct bfq_queue *in_service_bfqq, *new_bfqq; 2904 unsigned int a_idx = bfqq->actuator_idx; 2905 struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data[a_idx]; 2906 2907 /* if a merge has already been setup, then proceed with that first */ 2908 if (bfqq->new_bfqq) 2909 return bfqq->new_bfqq; 2910 2911 /* 2912 * Check delayed stable merge for rotational or non-queueing 2913 * devs. For this branch to be executed, bfqq must not be 2914 * currently merged with some other queue (i.e., bfqq->bic 2915 * must be non null). If we considered also merged queues, 2916 * then we should also check whether bfqq has already been 2917 * merged with bic->stable_merge_bfqq. But this would be 2918 * costly and complicated. 
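 *
 * Note also that the checks below require bfqq to be old enough:
 * both bfqq->split_time and bfqq->creation_time must be at least
 * bfq_late_stable_merging milliseconds in the past before a delayed
 * stable merge is attempted.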
2919 */ 2920 if (unlikely(!bfqd->nonrot_with_queueing)) { 2921 /* 2922 * Make sure also that bfqq is sync, because 2923 * bic->stable_merge_bfqq may point to some queue (for 2924 * stable merging) also if bic is associated with a 2925 * sync queue, but this bfqq is async 2926 */ 2927 if (bfq_bfqq_sync(bfqq) && bfqq_data->stable_merge_bfqq && 2928 !bfq_bfqq_just_created(bfqq) && 2929 time_is_before_jiffies(bfqq->split_time + 2930 msecs_to_jiffies(bfq_late_stable_merging)) && 2931 time_is_before_jiffies(bfqq->creation_time + 2932 msecs_to_jiffies(bfq_late_stable_merging))) { 2933 struct bfq_queue *stable_merge_bfqq = 2934 bfqq_data->stable_merge_bfqq; 2935 2936 /* deschedule stable merge, because done or aborted here */ 2937 bfq_put_stable_ref(stable_merge_bfqq); 2938 2939 bfqq_data->stable_merge_bfqq = NULL; 2940 2941 return bfq_setup_stable_merge(bfqd, bfqq, 2942 stable_merge_bfqq, 2943 bfqq_data); 2944 } 2945 } 2946 2947 /* 2948 * Do not perform queue merging if the device is non 2949 * rotational and performs internal queueing. In fact, such a 2950 * device reaches a high speed through internal parallelism 2951 * and pipelining. This means that, to reach a high 2952 * throughput, it must have many requests enqueued at the same 2953 * time. But, in this configuration, the internal scheduling 2954 * algorithm of the device does exactly the job of queue 2955 * merging: it reorders requests so as to obtain as much as 2956 * possible a sequential I/O pattern. As a consequence, with 2957 * the workload generated by processes doing interleaved I/O, 2958 * the throughput reached by the device is likely to be the 2959 * same, with and without queue merging. 2960 * 2961 * Disabling merging also provides a remarkable benefit in 2962 * terms of throughput. Merging tends to make many workloads 2963 * artificially more uneven, because of shared queues 2964 * remaining non empty for incomparably more time than 2965 * non-merged queues. This may accentuate workload 2966 * asymmetries. For example, if one of the queues in a set of 2967 * merged queues has a higher weight than a normal queue, then 2968 * the shared queue may inherit such a high weight and, by 2969 * staying almost always active, may force BFQ to perform I/O 2970 * plugging most of the time. This evidently makes it harder 2971 * for BFQ to let the device reach a high throughput. 2972 * 2973 * Finally, the likely() macro below is not used because one 2974 * of the two branches is more likely than the other, but to 2975 * have the code path after the following if() executed as 2976 * fast as possible for the case of a non rotational device 2977 * with queueing. We want it because this is the fastest kind 2978 * of device. On the opposite end, the likely() may lengthen 2979 * the execution time of BFQ for the case of slower devices 2980 * (rotational or at least without queueing). But in this case 2981 * the execution time of BFQ matters very little, if not at 2982 * all. 2983 */ 2984 if (likely(bfqd->nonrot_with_queueing)) 2985 return NULL; 2986 2987 /* 2988 * Prevent bfqq from being merged if it has been created too 2989 * long ago. The idea is that true cooperating processes, and 2990 * thus their associated bfq_queues, are supposed to be 2991 * created shortly after each other. This is the case, e.g., 2992 * for KVM/QEMU and dump I/O threads. 
Basing on this 2993 * assumption, the following filtering greatly reduces the 2994 * probability that two non-cooperating processes, which just 2995 * happen to do close I/O for some short time interval, have 2996 * their queues merged by mistake. 2997 */ 2998 if (bfq_too_late_for_merging(bfqq)) 2999 return NULL; 3000 3001 if (!io_struct || unlikely(bfqq == &bfqd->oom_bfqq)) 3002 return NULL; 3003 3004 /* If there is only one backlogged queue, don't search. */ 3005 if (bfq_tot_busy_queues(bfqd) == 1) 3006 return NULL; 3007 3008 in_service_bfqq = bfqd->in_service_queue; 3009 3010 if (in_service_bfqq && in_service_bfqq != bfqq && 3011 likely(in_service_bfqq != &bfqd->oom_bfqq) && 3012 bfq_rq_close_to_sector(io_struct, request, 3013 bfqd->in_serv_last_pos) && 3014 bfqq->entity.parent == in_service_bfqq->entity.parent && 3015 bfq_may_be_close_cooperator(bfqq, in_service_bfqq)) { 3016 new_bfqq = bfq_setup_merge(bfqq, in_service_bfqq); 3017 if (new_bfqq) 3018 return new_bfqq; 3019 } 3020 /* 3021 * Check whether there is a cooperator among currently scheduled 3022 * queues. The only thing we need is that the bio/request is not 3023 * NULL, as we need it to establish whether a cooperator exists. 3024 */ 3025 new_bfqq = bfq_find_close_cooperator(bfqd, bfqq, 3026 bfq_io_struct_pos(io_struct, request)); 3027 3028 if (new_bfqq && likely(new_bfqq != &bfqd->oom_bfqq) && 3029 bfq_may_be_close_cooperator(bfqq, new_bfqq)) 3030 return bfq_setup_merge(bfqq, new_bfqq); 3031 3032 return NULL; 3033 } 3034 3035 static void bfq_bfqq_save_state(struct bfq_queue *bfqq) 3036 { 3037 struct bfq_io_cq *bic = bfqq->bic; 3038 unsigned int a_idx = bfqq->actuator_idx; 3039 struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data[a_idx]; 3040 3041 /* 3042 * If !bfqq->bic, the queue is already shared or its requests 3043 * have already been redirected to a shared queue; both idle window 3044 * and weight raising state have already been saved. Do nothing. 3045 */ 3046 if (!bic) 3047 return; 3048 3049 bfqq_data->saved_last_serv_time_ns = bfqq->last_serv_time_ns; 3050 bfqq_data->saved_inject_limit = bfqq->inject_limit; 3051 bfqq_data->saved_decrease_time_jif = bfqq->decrease_time_jif; 3052 3053 bfqq_data->saved_weight = bfqq->entity.orig_weight; 3054 bfqq_data->saved_ttime = bfqq->ttime; 3055 bfqq_data->saved_has_short_ttime = 3056 bfq_bfqq_has_short_ttime(bfqq); 3057 bfqq_data->saved_IO_bound = bfq_bfqq_IO_bound(bfqq); 3058 bfqq_data->saved_io_start_time = bfqq->io_start_time; 3059 bfqq_data->saved_tot_idle_time = bfqq->tot_idle_time; 3060 bfqq_data->saved_in_large_burst = bfq_bfqq_in_large_burst(bfqq); 3061 bfqq_data->was_in_burst_list = 3062 !hlist_unhashed(&bfqq->burst_list_node); 3063 3064 if (unlikely(bfq_bfqq_just_created(bfqq) && 3065 !bfq_bfqq_in_large_burst(bfqq) && 3066 bfqq->bfqd->low_latency)) { 3067 /* 3068 * bfqq being merged right after being created: bfqq 3069 * would have deserved interactive weight raising, but 3070 * did not make it to be set in a weight-raised state, 3071 * because of this early merge. Store directly the 3072 * weight-raising state that would have been assigned 3073 * to bfqq, so that to avoid that bfqq unjustly fails 3074 * to enjoy weight raising if split soon. 
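 *
 * Roughly, the values stored below describe a fresh interactive
 * weight-raising period: the interactive coefficient (bfq_wr_coeff),
 * an interactive duration (bfq_wr_duration()), a raising period
 * starting now, and a wr_start_at_switch_to_srt pushed to the
 * farthest past through bfq_smallest_from_now(), so that no stale
 * soft-rt period can later be inferred from it.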
3075 */ 3076 bfqq_data->saved_wr_coeff = bfqq->bfqd->bfq_wr_coeff; 3077 bfqq_data->saved_wr_start_at_switch_to_srt = 3078 bfq_smallest_from_now(); 3079 bfqq_data->saved_wr_cur_max_time = 3080 bfq_wr_duration(bfqq->bfqd); 3081 bfqq_data->saved_last_wr_start_finish = jiffies; 3082 } else { 3083 bfqq_data->saved_wr_coeff = bfqq->wr_coeff; 3084 bfqq_data->saved_wr_start_at_switch_to_srt = 3085 bfqq->wr_start_at_switch_to_srt; 3086 bfqq_data->saved_service_from_wr = 3087 bfqq->service_from_wr; 3088 bfqq_data->saved_last_wr_start_finish = 3089 bfqq->last_wr_start_finish; 3090 bfqq_data->saved_wr_cur_max_time = bfqq->wr_cur_max_time; 3091 } 3092 } 3093 3094 3095 static void 3096 bfq_reassign_last_bfqq(struct bfq_queue *cur_bfqq, struct bfq_queue *new_bfqq) 3097 { 3098 if (cur_bfqq->entity.parent && 3099 cur_bfqq->entity.parent->last_bfqq_created == cur_bfqq) 3100 cur_bfqq->entity.parent->last_bfqq_created = new_bfqq; 3101 else if (cur_bfqq->bfqd && cur_bfqq->bfqd->last_bfqq_created == cur_bfqq) 3102 cur_bfqq->bfqd->last_bfqq_created = new_bfqq; 3103 } 3104 3105 void bfq_release_process_ref(struct bfq_data *bfqd, struct bfq_queue *bfqq) 3106 { 3107 /* 3108 * To prevent bfqq's service guarantees from being violated, 3109 * bfqq may be left busy, i.e., queued for service, even if 3110 * empty (see comments in __bfq_bfqq_expire() for 3111 * details). But, if no process will send requests to bfqq any 3112 * longer, then there is no point in keeping bfqq queued for 3113 * service. In addition, keeping bfqq queued for service, but 3114 * with no process ref any longer, may have caused bfqq to be 3115 * freed when dequeued from service. But this is assumed to 3116 * never happen. 3117 */ 3118 if (bfq_bfqq_busy(bfqq) && RB_EMPTY_ROOT(&bfqq->sort_list) && 3119 bfqq != bfqd->in_service_queue) 3120 bfq_del_bfqq_busy(bfqq, false); 3121 3122 bfq_reassign_last_bfqq(bfqq, NULL); 3123 3124 bfq_put_queue(bfqq); 3125 } 3126 3127 static void 3128 bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic, 3129 struct bfq_queue *bfqq, struct bfq_queue *new_bfqq) 3130 { 3131 bfq_log_bfqq(bfqd, bfqq, "merging with queue %lu", 3132 (unsigned long)new_bfqq->pid); 3133 /* Save weight raising and idle window of the merged queues */ 3134 bfq_bfqq_save_state(bfqq); 3135 bfq_bfqq_save_state(new_bfqq); 3136 if (bfq_bfqq_IO_bound(bfqq)) 3137 bfq_mark_bfqq_IO_bound(new_bfqq); 3138 bfq_clear_bfqq_IO_bound(bfqq); 3139 3140 /* 3141 * The processes associated with bfqq are cooperators of the 3142 * processes associated with new_bfqq. So, if bfqq has a 3143 * waker, then assume that all these processes will be happy 3144 * to let bfqq's waker freely inject I/O when they have no 3145 * I/O. 3146 */ 3147 if (bfqq->waker_bfqq && !new_bfqq->waker_bfqq && 3148 bfqq->waker_bfqq != new_bfqq) { 3149 new_bfqq->waker_bfqq = bfqq->waker_bfqq; 3150 new_bfqq->tentative_waker_bfqq = NULL; 3151 3152 /* 3153 * If the waker queue disappears, then 3154 * new_bfqq->waker_bfqq must be reset. So insert 3155 * new_bfqq into the woken_list of the waker. See 3156 * bfq_check_waker for details. 3157 */ 3158 hlist_add_head(&new_bfqq->woken_list_node, 3159 &new_bfqq->waker_bfqq->woken_list); 3160 3161 } 3162 3163 /* 3164 * If bfqq is weight-raised, then let new_bfqq inherit 3165 * weight-raising. To reduce false positives, neglect the case 3166 * where bfqq has just been created, but has not yet made it 3167 * to be weight-raised (which may happen because EQM may merge 3168 * bfqq even before bfq_add_request is executed for the first 3169 * time for bfqq). 
Handling this case would however be very 3170 * easy, thanks to the flag just_created. 3171 */ 3172 if (new_bfqq->wr_coeff == 1 && bfqq->wr_coeff > 1) { 3173 new_bfqq->wr_coeff = bfqq->wr_coeff; 3174 new_bfqq->wr_cur_max_time = bfqq->wr_cur_max_time; 3175 new_bfqq->last_wr_start_finish = bfqq->last_wr_start_finish; 3176 new_bfqq->wr_start_at_switch_to_srt = 3177 bfqq->wr_start_at_switch_to_srt; 3178 if (bfq_bfqq_busy(new_bfqq)) 3179 bfqd->wr_busy_queues++; 3180 new_bfqq->entity.prio_changed = 1; 3181 } 3182 3183 if (bfqq->wr_coeff > 1) { /* bfqq has given its wr to new_bfqq */ 3184 bfqq->wr_coeff = 1; 3185 bfqq->entity.prio_changed = 1; 3186 if (bfq_bfqq_busy(bfqq)) 3187 bfqd->wr_busy_queues--; 3188 } 3189 3190 bfq_log_bfqq(bfqd, new_bfqq, "merge_bfqqs: wr_busy %d", 3191 bfqd->wr_busy_queues); 3192 3193 /* 3194 * Merge queues (that is, let bic redirect its requests to new_bfqq) 3195 */ 3196 bic_set_bfqq(bic, new_bfqq, true, bfqq->actuator_idx); 3197 bfq_mark_bfqq_coop(new_bfqq); 3198 /* 3199 * new_bfqq now belongs to at least two bics (it is a shared queue): 3200 * set new_bfqq->bic to NULL. bfqq either: 3201 * - does not belong to any bic any more, and hence bfqq->bic must 3202 * be set to NULL, or 3203 * - is a queue whose owning bics have already been redirected to a 3204 * different queue, hence the queue is destined to not belong to 3205 * any bic soon and bfqq->bic is already NULL (therefore the next 3206 * assignment causes no harm). 3207 */ 3208 new_bfqq->bic = NULL; 3209 /* 3210 * If the queue is shared, the pid is the pid of one of the associated 3211 * processes. Which pid depends on the exact sequence of merge events 3212 * the queue underwent. So printing such a pid is useless and confusing 3213 * because it reports a random pid between those of the associated 3214 * processes. 3215 * We mark such a queue with a pid -1, and then print SHARED instead of 3216 * a pid in logging messages. 3217 */ 3218 new_bfqq->pid = -1; 3219 bfqq->bic = NULL; 3220 3221 bfq_reassign_last_bfqq(bfqq, new_bfqq); 3222 3223 bfq_release_process_ref(bfqd, bfqq); 3224 } 3225 3226 static bool bfq_allow_bio_merge(struct request_queue *q, struct request *rq, 3227 struct bio *bio) 3228 { 3229 struct bfq_data *bfqd = q->elevator->elevator_data; 3230 bool is_sync = op_is_sync(bio->bi_opf); 3231 struct bfq_queue *bfqq = bfqd->bio_bfqq, *new_bfqq; 3232 3233 /* 3234 * Disallow merge of a sync bio into an async request. 3235 */ 3236 if (is_sync && !rq_is_sync(rq)) 3237 return false; 3238 3239 /* 3240 * Lookup the bfqq that this bio will be queued with. Allow 3241 * merge only if rq is queued there. 3242 */ 3243 if (!bfqq) 3244 return false; 3245 3246 /* 3247 * We take advantage of this function to perform an early merge 3248 * of the queues of possible cooperating processes. 3249 */ 3250 new_bfqq = bfq_setup_cooperator(bfqd, bfqq, bio, false, bfqd->bio_bic); 3251 if (new_bfqq) { 3252 /* 3253 * bic still points to bfqq, then it has not yet been 3254 * redirected to some other bfq_queue, and a queue 3255 * merge between bfqq and new_bfqq can be safely 3256 * fulfilled, i.e., bic can be redirected to new_bfqq 3257 * and bfqq can be put. 3258 */ 3259 bfq_merge_bfqqs(bfqd, bfqd->bio_bic, bfqq, 3260 new_bfqq); 3261 /* 3262 * If we get here, bio will be queued into new_queue, 3263 * so use new_bfqq to decide whether bio and rq can be 3264 * merged. 
3265 */ 3266 bfqq = new_bfqq; 3267 3268 /* 3269 * Change also bfqd->bio_bfqq, as 3270 * bfqd->bio_bic now points to new_bfqq, and 3271 * this function may be invoked again (and then may 3272 * use again bfqd->bio_bfqq). 3273 */ 3274 bfqd->bio_bfqq = bfqq; 3275 } 3276 3277 return bfqq == RQ_BFQQ(rq); 3278 } 3279 3280 /* 3281 * Set the maximum time for the in-service queue to consume its 3282 * budget. This prevents seeky processes from lowering the throughput. 3283 * In practice, a time-slice service scheme is used with seeky 3284 * processes. 3285 */ 3286 static void bfq_set_budget_timeout(struct bfq_data *bfqd, 3287 struct bfq_queue *bfqq) 3288 { 3289 unsigned int timeout_coeff; 3290 3291 if (bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time) 3292 timeout_coeff = 1; 3293 else 3294 timeout_coeff = bfqq->entity.weight / bfqq->entity.orig_weight; 3295 3296 bfqd->last_budget_start = ktime_get(); 3297 3298 bfqq->budget_timeout = jiffies + 3299 bfqd->bfq_timeout * timeout_coeff; 3300 } 3301 3302 static void __bfq_set_in_service_queue(struct bfq_data *bfqd, 3303 struct bfq_queue *bfqq) 3304 { 3305 if (bfqq) { 3306 bfq_clear_bfqq_fifo_expire(bfqq); 3307 3308 bfqd->budgets_assigned = (bfqd->budgets_assigned * 7 + 256) / 8; 3309 3310 if (time_is_before_jiffies(bfqq->last_wr_start_finish) && 3311 bfqq->wr_coeff > 1 && 3312 bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time && 3313 time_is_before_jiffies(bfqq->budget_timeout)) { 3314 /* 3315 * For soft real-time queues, move the start 3316 * of the weight-raising period forward by the 3317 * time the queue has not received any 3318 * service. Otherwise, a relatively long 3319 * service delay is likely to cause the 3320 * weight-raising period of the queue to end, 3321 * because of the short duration of the 3322 * weight-raising period of a soft real-time 3323 * queue. It is worth noting that this move 3324 * is not so dangerous for the other queues, 3325 * because soft real-time queues are not 3326 * greedy. 3327 * 3328 * To not add a further variable, we use the 3329 * overloaded field budget_timeout to 3330 * determine for how long the queue has not 3331 * received service, i.e., how much time has 3332 * elapsed since the queue expired. However, 3333 * this is a little imprecise, because 3334 * budget_timeout is set to jiffies if bfqq 3335 * not only expires, but also remains with no 3336 * request. 3337 */ 3338 if (time_after(bfqq->budget_timeout, 3339 bfqq->last_wr_start_finish)) 3340 bfqq->last_wr_start_finish += 3341 jiffies - bfqq->budget_timeout; 3342 else 3343 bfqq->last_wr_start_finish = jiffies; 3344 } 3345 3346 bfq_set_budget_timeout(bfqd, bfqq); 3347 bfq_log_bfqq(bfqd, bfqq, 3348 "set_in_service_queue, cur-budget = %d", 3349 bfqq->entity.budget); 3350 } 3351 3352 bfqd->in_service_queue = bfqq; 3353 bfqd->in_serv_last_pos = 0; 3354 } 3355 3356 /* 3357 * Get and set a new queue for service. 3358 */ 3359 static struct bfq_queue *bfq_set_in_service_queue(struct bfq_data *bfqd) 3360 { 3361 struct bfq_queue *bfqq = bfq_get_next_queue(bfqd); 3362 3363 __bfq_set_in_service_queue(bfqd, bfqq); 3364 return bfqq; 3365 } 3366 3367 static void bfq_arm_slice_timer(struct bfq_data *bfqd) 3368 { 3369 struct bfq_queue *bfqq = bfqd->in_service_queue; 3370 u32 sl; 3371 3372 bfq_mark_bfqq_wait_request(bfqq); 3373 3374 /* 3375 * We don't want to idle for seeks, but we do want to allow 3376 * fair distribution of slice time for a process doing back-to-back 3377 * seeks. So allow a little bit of time for it to submit a new rq.
3378 */ 3379 sl = bfqd->bfq_slice_idle; 3380 /* 3381 * Unless the queue is being weight-raised or the scenario is 3382 * asymmetric, grant only minimum idle time if the queue 3383 * is seeky. A long idling is preserved for a weight-raised 3384 * queue, or, more in general, in an asymmetric scenario, 3385 * because a long idling is needed for guaranteeing to a queue 3386 * its reserved share of the throughput (in particular, it is 3387 * needed if the queue has a higher weight than some other 3388 * queue). 3389 */ 3390 if (BFQQ_SEEKY(bfqq) && bfqq->wr_coeff == 1 && 3391 !bfq_asymmetric_scenario(bfqd, bfqq)) 3392 sl = min_t(u64, sl, BFQ_MIN_TT); 3393 else if (bfqq->wr_coeff > 1) 3394 sl = max_t(u32, sl, 20ULL * NSEC_PER_MSEC); 3395 3396 bfqd->last_idling_start = ktime_get(); 3397 bfqd->last_idling_start_jiffies = jiffies; 3398 3399 hrtimer_start(&bfqd->idle_slice_timer, ns_to_ktime(sl), 3400 HRTIMER_MODE_REL); 3401 bfqg_stats_set_start_idle_time(bfqq_group(bfqq)); 3402 } 3403 3404 /* 3405 * In autotuning mode, max_budget is dynamically recomputed as the 3406 * amount of sectors transferred in timeout at the estimated peak 3407 * rate. This enables BFQ to utilize a full timeslice with a full 3408 * budget, even if the in-service queue is served at peak rate. And 3409 * this maximises throughput with sequential workloads. 3410 */ 3411 static unsigned long bfq_calc_max_budget(struct bfq_data *bfqd) 3412 { 3413 return (u64)bfqd->peak_rate * USEC_PER_MSEC * 3414 jiffies_to_msecs(bfqd->bfq_timeout)>>BFQ_RATE_SHIFT; 3415 } 3416 3417 /* 3418 * Update parameters related to throughput and responsiveness, as a 3419 * function of the estimated peak rate. See comments on 3420 * bfq_calc_max_budget(), and on the ref_wr_duration array. 3421 */ 3422 static void update_thr_responsiveness_params(struct bfq_data *bfqd) 3423 { 3424 if (bfqd->bfq_user_max_budget == 0) { 3425 bfqd->bfq_max_budget = 3426 bfq_calc_max_budget(bfqd); 3427 bfq_log(bfqd, "new max_budget = %d", bfqd->bfq_max_budget); 3428 } 3429 } 3430 3431 static void bfq_reset_rate_computation(struct bfq_data *bfqd, 3432 struct request *rq) 3433 { 3434 if (rq != NULL) { /* new rq dispatch now, reset accordingly */ 3435 bfqd->last_dispatch = bfqd->first_dispatch = ktime_get_ns(); 3436 bfqd->peak_rate_samples = 1; 3437 bfqd->sequential_samples = 0; 3438 bfqd->tot_sectors_dispatched = bfqd->last_rq_max_size = 3439 blk_rq_sectors(rq); 3440 } else /* no new rq dispatched, just reset the number of samples */ 3441 bfqd->peak_rate_samples = 0; /* full re-init on next disp. */ 3442 3443 bfq_log(bfqd, 3444 "reset_rate_computation at end, sample %u/%u tot_sects %llu", 3445 bfqd->peak_rate_samples, bfqd->sequential_samples, 3446 bfqd->tot_sectors_dispatched); 3447 } 3448 3449 static void bfq_update_rate_reset(struct bfq_data *bfqd, struct request *rq) 3450 { 3451 u32 rate, weight, divisor; 3452 3453 /* 3454 * For the convergence property to hold (see comments on 3455 * bfq_update_peak_rate()) and for the assessment to be 3456 * reliable, a minimum number of samples must be present, and 3457 * a minimum amount of time must have elapsed. If not so, do 3458 * not compute new rate. Just reset parameters, to get ready 3459 * for a new evaluation attempt. 
3460 */ 3461 if (bfqd->peak_rate_samples < BFQ_RATE_MIN_SAMPLES || 3462 bfqd->delta_from_first < BFQ_RATE_MIN_INTERVAL) 3463 goto reset_computation; 3464 3465 /* 3466 * If a new request completion has occurred after last 3467 * dispatch, then, to approximate the rate at which requests 3468 * have been served by the device, it is more precise to 3469 * extend the observation interval to the last completion. 3470 */ 3471 bfqd->delta_from_first = 3472 max_t(u64, bfqd->delta_from_first, 3473 bfqd->last_completion - bfqd->first_dispatch); 3474 3475 /* 3476 * Rate computed in sects/usec, and not sects/nsec, for 3477 * precision issues. 3478 */ 3479 rate = div64_ul(bfqd->tot_sectors_dispatched<<BFQ_RATE_SHIFT, 3480 div_u64(bfqd->delta_from_first, NSEC_PER_USEC)); 3481 3482 /* 3483 * Peak rate not updated if: 3484 * - the percentage of sequential dispatches is below 3/4 of the 3485 * total, and rate is below the current estimated peak rate 3486 * - rate is unreasonably high (> 20M sectors/sec) 3487 */ 3488 if ((bfqd->sequential_samples < (3 * bfqd->peak_rate_samples)>>2 && 3489 rate <= bfqd->peak_rate) || 3490 rate > 20<<BFQ_RATE_SHIFT) 3491 goto reset_computation; 3492 3493 /* 3494 * We have to update the peak rate, at last! To this purpose, 3495 * we use a low-pass filter. We compute the smoothing constant 3496 * of the filter as a function of the 'weight' of the new 3497 * measured rate. 3498 * 3499 * As can be seen in next formulas, we define this weight as a 3500 * quantity proportional to how sequential the workload is, 3501 * and to how long the observation time interval is. 3502 * 3503 * The weight runs from 0 to 8. The maximum value of the 3504 * weight, 8, yields the minimum value for the smoothing 3505 * constant. At this minimum value for the smoothing constant, 3506 * the measured rate contributes for half of the next value of 3507 * the estimated peak rate. 3508 * 3509 * So, the first step is to compute the weight as a function 3510 * of how sequential the workload is. Note that the weight 3511 * cannot reach 9, because bfqd->sequential_samples cannot 3512 * become equal to bfqd->peak_rate_samples, which, in its 3513 * turn, holds true because bfqd->sequential_samples is not 3514 * incremented for the first sample. 3515 */ 3516 weight = (9 * bfqd->sequential_samples) / bfqd->peak_rate_samples; 3517 3518 /* 3519 * Second step: further refine the weight as a function of the 3520 * duration of the observation interval. 3521 */ 3522 weight = min_t(u32, 8, 3523 div_u64(weight * bfqd->delta_from_first, 3524 BFQ_RATE_REF_INTERVAL)); 3525 3526 /* 3527 * Divisor ranging from 10, for minimum weight, to 2, for 3528 * maximum weight. 3529 */ 3530 divisor = 10 - weight; 3531 3532 /* 3533 * Finally, update peak rate: 3534 * 3535 * peak_rate = peak_rate * (divisor-1) / divisor + rate / divisor 3536 */ 3537 bfqd->peak_rate *= divisor-1; 3538 bfqd->peak_rate /= divisor; 3539 rate /= divisor; /* smoothing constant alpha = 1/divisor */ 3540 3541 bfqd->peak_rate += rate; 3542 3543 /* 3544 * For a very slow device, bfqd->peak_rate can reach 0 (see 3545 * the minimum representable values reported in the comments 3546 * on BFQ_RATE_SHIFT). Push to 1 if this happens, to avoid 3547 * divisions by zero where bfqd->peak_rate is used as a 3548 * divisor. 
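 *
 * As an illustrative recap of the update above: with, say, 31
 * sequential samples out of 32, and an observation interval at least
 * as long as BFQ_RATE_REF_INTERVAL, the first step yields
 * weight = (9 * 31) / 32 = 8 and the second step leaves it at 8, so
 * divisor = 2 and the new estimate becomes peak_rate / 2 + rate / 2,
 * i.e., the new measurement contributes for half. A short or mostly
 * random observation interval yields instead a weight close to 0 and
 * a divisor close to 10, so the new measurement contributes for only
 * about one tenth.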
3549 */ 3550 bfqd->peak_rate = max_t(u32, 1, bfqd->peak_rate); 3551 3552 update_thr_responsiveness_params(bfqd); 3553 3554 reset_computation: 3555 bfq_reset_rate_computation(bfqd, rq); 3556 } 3557 3558 /* 3559 * Update the read/write peak rate (the main quantity used for 3560 * auto-tuning, see update_thr_responsiveness_params()). 3561 * 3562 * It is not trivial to estimate the peak rate (correctly): because of 3563 * the presence of sw and hw queues between the scheduler and the 3564 * device components that finally serve I/O requests, it is hard to 3565 * say exactly when a given dispatched request is served inside the 3566 * device, and for how long. As a consequence, it is hard to know 3567 * precisely at what rate a given set of requests is actually served 3568 * by the device. 3569 * 3570 * On the opposite end, the dispatch time of any request is trivially 3571 * available, and, from this piece of information, the "dispatch rate" 3572 * of requests can be immediately computed. So, the idea in the next 3573 * function is to use what is known, namely request dispatch times 3574 * (plus, when useful, request completion times), to estimate what is 3575 * unknown, namely in-device request service rate. 3576 * 3577 * The main issue is that, because of the above facts, the rate at 3578 * which a certain set of requests is dispatched over a certain time 3579 * interval can vary greatly with respect to the rate at which the 3580 * same requests are then served. But, since the size of any 3581 * intermediate queue is limited, and the service scheme is lossless 3582 * (no request is silently dropped), the following obvious convergence 3583 * property holds: the number of requests dispatched MUST become 3584 * closer and closer to the number of requests completed as the 3585 * observation interval grows. This is the key property used in 3586 * the next function to estimate the peak service rate as a function 3587 * of the observed dispatch rate. The function assumes to be invoked 3588 * on every request dispatch. 3589 */ 3590 static void bfq_update_peak_rate(struct bfq_data *bfqd, struct request *rq) 3591 { 3592 u64 now_ns = ktime_get_ns(); 3593 3594 if (bfqd->peak_rate_samples == 0) { /* first dispatch */ 3595 bfq_log(bfqd, "update_peak_rate: goto reset, samples %d", 3596 bfqd->peak_rate_samples); 3597 bfq_reset_rate_computation(bfqd, rq); 3598 goto update_last_values; /* will add one sample */ 3599 } 3600 3601 /* 3602 * Device idle for very long: the observation interval lasting 3603 * up to this dispatch cannot be a valid observation interval 3604 * for computing a new peak rate (similarly to the late- 3605 * completion event in bfq_completed_request()). 
Go to 3606 * update_rate_and_reset to have the following three steps 3607 * taken: 3608 * - close the observation interval at the last (previous) 3609 * request dispatch or completion 3610 * - compute rate, if possible, for that observation interval 3611 * - start a new observation interval with this dispatch 3612 */ 3613 if (now_ns - bfqd->last_dispatch > 100*NSEC_PER_MSEC && 3614 bfqd->tot_rq_in_driver == 0) 3615 goto update_rate_and_reset; 3616 3617 /* Update sampling information */ 3618 bfqd->peak_rate_samples++; 3619 3620 if ((bfqd->tot_rq_in_driver > 0 || 3621 now_ns - bfqd->last_completion < BFQ_MIN_TT) 3622 && !BFQ_RQ_SEEKY(bfqd, bfqd->last_position, rq)) 3623 bfqd->sequential_samples++; 3624 3625 bfqd->tot_sectors_dispatched += blk_rq_sectors(rq); 3626 3627 /* Reset max observed rq size every 32 dispatches */ 3628 if (likely(bfqd->peak_rate_samples % 32)) 3629 bfqd->last_rq_max_size = 3630 max_t(u32, blk_rq_sectors(rq), bfqd->last_rq_max_size); 3631 else 3632 bfqd->last_rq_max_size = blk_rq_sectors(rq); 3633 3634 bfqd->delta_from_first = now_ns - bfqd->first_dispatch; 3635 3636 /* Target observation interval not yet reached, go on sampling */ 3637 if (bfqd->delta_from_first < BFQ_RATE_REF_INTERVAL) 3638 goto update_last_values; 3639 3640 update_rate_and_reset: 3641 bfq_update_rate_reset(bfqd, rq); 3642 update_last_values: 3643 bfqd->last_position = blk_rq_pos(rq) + blk_rq_sectors(rq); 3644 if (RQ_BFQQ(rq) == bfqd->in_service_queue) 3645 bfqd->in_serv_last_pos = bfqd->last_position; 3646 bfqd->last_dispatch = now_ns; 3647 } 3648 3649 /* 3650 * Remove request from internal lists. 3651 */ 3652 static void bfq_dispatch_remove(struct request_queue *q, struct request *rq) 3653 { 3654 struct bfq_queue *bfqq = RQ_BFQQ(rq); 3655 3656 /* 3657 * For consistency, the next instruction should have been 3658 * executed after removing the request from the queue and 3659 * dispatching it. We execute instead this instruction before 3660 * bfq_remove_request() (and hence introduce a temporary 3661 * inconsistency), for efficiency. In fact, should this 3662 * dispatch occur for a non in-service bfqq, this anticipated 3663 * increment prevents two counters related to bfqq->dispatched 3664 * from risking to be, first, uselessly decremented, and then 3665 * incremented again when the (new) value of bfqq->dispatched 3666 * happens to be taken into account. 3667 */ 3668 bfqq->dispatched++; 3669 bfq_update_peak_rate(q->elevator->elevator_data, rq); 3670 3671 bfq_remove_request(q, rq); 3672 } 3673 3674 /* 3675 * There is a case where idling does not have to be performed for 3676 * throughput concerns, but to preserve the throughput share of 3677 * the process associated with bfqq. 3678 * 3679 * To introduce this case, we can note that allowing the drive 3680 * to enqueue more than one request at a time, and hence 3681 * delegating de facto final scheduling decisions to the 3682 * drive's internal scheduler, entails loss of control on the 3683 * actual request service order. In particular, the critical 3684 * situation is when requests from different processes happen 3685 * to be present, at the same time, in the internal queue(s) 3686 * of the drive. In such a situation, the drive, by deciding 3687 * the service order of the internally-queued requests, does 3688 * determine also the actual throughput distribution among 3689 * these processes. But the drive typically has no notion or 3690 * concern about per-process throughput distribution, and 3691 * makes its decisions only on a per-request basis. 
Therefore, 3692 * the service distribution enforced by the drive's internal 3693 * scheduler is likely to coincide with the desired throughput 3694 * distribution only in a completely symmetric, or favorably 3695 * skewed scenario where: 3696 * (i-a) each of these processes must get the same throughput as 3697 * the others, 3698 * (i-b) in case (i-a) does not hold, it holds that the process 3699 * associated with bfqq must receive a throughput lower than or 3700 * equal to that of any of the other processes; 3701 * (ii) the I/O of each process has the same properties, in 3702 * terms of locality (sequential or random), direction 3703 * (reads or writes), request sizes, greediness 3704 * (from I/O-bound to sporadic), and so on; 3705 * 3706 * In fact, in such a scenario, the drive tends to treat the requests 3707 * of each process in about the same way as the requests of the 3708 * others, and thus to provide each of these processes with about the 3709 * same throughput. This is exactly the desired throughput 3710 * distribution if (i-a) holds, or, if (i-b) holds instead, this is an 3711 * even more convenient distribution for (the process associated with) 3712 * bfqq. 3713 * 3714 * In contrast, in any asymmetric or unfavorable scenario, device 3715 * idling (I/O-dispatch plugging) is certainly needed to guarantee 3716 * that bfqq receives its assigned fraction of the device throughput 3717 * (see [1] for details). 3718 * 3719 * The problem is that idling may significantly reduce throughput with 3720 * certain combinations of types of I/O and devices. An important 3721 * example is sync random I/O on flash storage with command 3722 * queueing. So, unless bfqq falls in cases where idling also boosts 3723 * throughput, it is important to check conditions (i-a), (i-b) and 3724 * (ii) accurately, so as to avoid idling when not strictly needed for 3725 * service guarantees. 3726 * 3727 * Unfortunately, it is extremely difficult to thoroughly check 3728 * condition (ii). And, in case there are active groups, it becomes 3729 * very difficult to check conditions (i-a) and (i-b) too. In fact, 3730 * if there are active groups, then, for conditions (i-a) or (i-b) to 3731 * become false 'indirectly', it is enough that an active group 3732 * contains more active processes or sub-groups than some other active 3733 * group. More precisely, for conditions (i-a) or (i-b) to become 3734 * false because of such a group, it is not even necessary that the 3735 * group is (still) active: it is sufficient that, even if the group 3736 * has become inactive, some of its descendant processes still have 3737 * some request already dispatched but still waiting for 3738 * completion. In fact, requests have still to be guaranteed their 3739 * share of the throughput even after being dispatched. In this 3740 * respect, it is easy to show that, if a group frequently becomes 3741 * inactive while still having in-flight requests, and if, when this 3742 * happens, the group is not considered in the calculation of whether 3743 * the scenario is asymmetric, then the group may fail to be 3744 * guaranteed its fair share of the throughput (basically because 3745 * idling may not be performed for the descendant processes of the 3746 * group, but it had to be). We address this issue with the following 3747 * bi-modal behavior, implemented in the function 3748 * bfq_asymmetric_scenario().
3749 * 3750 * If there are groups with requests waiting for completion 3751 * (as commented above, some of these groups may even be 3752 * already inactive), then the scenario is tagged as 3753 * asymmetric, conservatively, without checking any of the 3754 * conditions (i-a), (i-b) or (ii). So the device is idled for bfqq. 3755 * This behavior matches also the fact that groups are created 3756 * exactly if controlling I/O is a primary concern (to 3757 * preserve bandwidth and latency guarantees). 3758 * 3759 * On the opposite end, if there are no groups with requests waiting 3760 * for completion, then only conditions (i-a) and (i-b) are actually 3761 * controlled, i.e., provided that conditions (i-a) or (i-b) holds, 3762 * idling is not performed, regardless of whether condition (ii) 3763 * holds. In other words, only if conditions (i-a) and (i-b) do not 3764 * hold, then idling is allowed, and the device tends to be prevented 3765 * from queueing many requests, possibly of several processes. Since 3766 * there are no groups with requests waiting for completion, then, to 3767 * control conditions (i-a) and (i-b) it is enough to check just 3768 * whether all the queues with requests waiting for completion also 3769 * have the same weight. 3770 * 3771 * Not checking condition (ii) evidently exposes bfqq to the 3772 * risk of getting less throughput than its fair share. 3773 * However, for queues with the same weight, a further 3774 * mechanism, preemption, mitigates or even eliminates this 3775 * problem. And it does so without consequences on overall 3776 * throughput. This mechanism and its benefits are explained 3777 * in the next three paragraphs. 3778 * 3779 * Even if a queue, say Q, is expired when it remains idle, Q 3780 * can still preempt the new in-service queue if the next 3781 * request of Q arrives soon (see the comments on 3782 * bfq_bfqq_update_budg_for_activation). If all queues and 3783 * groups have the same weight, this form of preemption, 3784 * combined with the hole-recovery heuristic described in the 3785 * comments on function bfq_bfqq_update_budg_for_activation, 3786 * are enough to preserve a correct bandwidth distribution in 3787 * the mid term, even without idling. In fact, even if not 3788 * idling allows the internal queues of the device to contain 3789 * many requests, and thus to reorder requests, we can rather 3790 * safely assume that the internal scheduler still preserves a 3791 * minimum of mid-term fairness. 3792 * 3793 * More precisely, this preemption-based, idleless approach 3794 * provides fairness in terms of IOPS, and not sectors per 3795 * second. This can be seen with a simple example. Suppose 3796 * that there are two queues with the same weight, but that 3797 * the first queue receives requests of 8 sectors, while the 3798 * second queue receives requests of 1024 sectors. In 3799 * addition, suppose that each of the two queues contains at 3800 * most one request at a time, which implies that each queue 3801 * always remains idle after it is served. Finally, after 3802 * remaining idle, each queue receives very quickly a new 3803 * request. It follows that the two queues are served 3804 * alternatively, preempting each other if needed. This 3805 * implies that, although both queues have the same weight, 3806 * the queue with large requests receives a service that is 3807 * 1024/8 times as high as the service received by the other 3808 * queue. 
3809 * 3810 * The motivation for using preemption instead of idling (for 3811 * queues with the same weight) is that, by not idling, 3812 * service guarantees are preserved (completely or at least in 3813 * part) without minimally sacrificing throughput. And, if 3814 * there is no active group, then the primary expectation for 3815 * this device is probably a high throughput. 3816 * 3817 * We are now left only with explaining the two sub-conditions in the 3818 * additional compound condition that is checked below for deciding 3819 * whether the scenario is asymmetric. To explain the first 3820 * sub-condition, we need to add that the function 3821 * bfq_asymmetric_scenario checks the weights of only 3822 * non-weight-raised queues, for efficiency reasons (see comments on 3823 * bfq_weights_tree_add()). Then the fact that bfqq is weight-raised 3824 * is checked explicitly here. More precisely, the compound condition 3825 * below takes into account also the fact that, even if bfqq is being 3826 * weight-raised, the scenario is still symmetric if all queues with 3827 * requests waiting for completion happen to be 3828 * weight-raised. Actually, we should be even more precise here, and 3829 * differentiate between interactive weight raising and soft real-time 3830 * weight raising. 3831 * 3832 * The second sub-condition checked in the compound condition is 3833 * whether there is a fair amount of already in-flight I/O not 3834 * belonging to bfqq. If so, I/O dispatching is to be plugged, for the 3835 * following reason. The drive may decide to serve in-flight 3836 * non-bfqq's I/O requests before bfqq's ones, thereby delaying the 3837 * arrival of new I/O requests for bfqq (recall that bfqq is sync). If 3838 * I/O-dispatching is not plugged, then, while bfqq remains empty, a 3839 * basically uncontrolled amount of I/O from other queues may be 3840 * dispatched too, possibly causing the service of bfqq's I/O to be 3841 * delayed even longer in the drive. This problem gets more and more 3842 * serious as the speed and the queue depth of the drive grow, 3843 * because, as these two quantities grow, the probability to find no 3844 * queue busy but many requests in flight grows too. By contrast, 3845 * plugging I/O dispatching minimizes the delay induced by already 3846 * in-flight I/O, and enables bfqq to recover the bandwidth it may 3847 * lose because of this delay. 3848 * 3849 * As a side note, it is worth considering that the above 3850 * device-idling countermeasures may however fail in the following 3851 * unlucky scenario: if I/O-dispatch plugging is (correctly) disabled 3852 * in a time period during which all symmetry sub-conditions hold, and 3853 * therefore the device is allowed to enqueue many requests, but at 3854 * some later point in time some sub-condition stops to hold, then it 3855 * may become impossible to make requests be served in the desired 3856 * order until all the requests already queued in the device have been 3857 * served. The last sub-condition commented above somewhat mitigates 3858 * this problem for weight-raised queues. 3859 * 3860 * However, as an additional mitigation for this problem, we preserve 3861 * plugging for a special symmetric case that may suddenly turn into 3862 * asymmetric: the case where only bfqq is busy. In this case, not 3863 * expiring bfqq does not cause any harm to any other queues in terms 3864 * of service guarantees. 
In contrast, it avoids the following unlucky 3865 * sequence of events: (1) bfqq is expired, (2) a new queue with a 3866 * lower weight than bfqq becomes busy (or more queues), (3) the new 3867 * queue is served until a new request arrives for bfqq, (4) when bfqq 3868 * is finally served, there are so many requests of the new queue in 3869 * the drive that the pending requests for bfqq take a lot of time to 3870 * be served. In particular, event (2) may cause even already 3871 * dispatched requests of bfqq to be delayed, inside the drive. So, to 3872 * avoid this series of events, the scenario is preventively declared 3873 * as asymmetric also if bfqq is the only busy queue. 3874 */ 3875 static bool idling_needed_for_service_guarantees(struct bfq_data *bfqd, 3876 struct bfq_queue *bfqq) 3877 { 3878 int tot_busy_queues = bfq_tot_busy_queues(bfqd); 3879 3880 /* No point in idling for bfqq if it won't get requests any longer */ 3881 if (unlikely(!bfqq_process_refs(bfqq))) 3882 return false; 3883 3884 return (bfqq->wr_coeff > 1 && 3885 (bfqd->wr_busy_queues < tot_busy_queues || 3886 bfqd->tot_rq_in_driver >= bfqq->dispatched + 4)) || 3887 bfq_asymmetric_scenario(bfqd, bfqq) || 3888 tot_busy_queues == 1; 3889 } 3890 3891 static bool __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq, 3892 enum bfqq_expiration reason) 3893 { 3894 /* 3895 * If this bfqq is shared between multiple processes, check 3896 * to make sure that those processes are still issuing I/Os 3897 * within the mean seek distance. If not, it may be time to 3898 * break the queues apart again. 3899 */ 3900 if (bfq_bfqq_coop(bfqq) && BFQQ_SEEKY(bfqq)) 3901 bfq_mark_bfqq_split_coop(bfqq); 3902 3903 /* 3904 * Consider queues with a higher finish virtual time than 3905 * bfqq. If idling_needed_for_service_guarantees(bfqq) returns 3906 * true, then bfqq's bandwidth would be violated if an 3907 * uncontrolled amount of I/O from these queues were 3908 * dispatched while bfqq is waiting for its new I/O to 3909 * arrive. This is exactly what may happen if this is a forced 3910 * expiration caused by a preemption attempt, and if bfqq is 3911 * not re-scheduled. To prevent this from happening, re-queue 3912 * bfqq if it needs I/O-dispatch plugging, even if it is 3913 * empty. By doing so, bfqq is guaranteed to be served before the 3914 * above queues (provided that bfqq is of course eligible). 3915 */ 3916 if (RB_EMPTY_ROOT(&bfqq->sort_list) && 3917 !(reason == BFQQE_PREEMPTED && 3918 idling_needed_for_service_guarantees(bfqd, bfqq))) { 3919 if (bfqq->dispatched == 0) 3920 /* 3921 * Overloading budget_timeout field to store 3922 * the time at which the queue remains with no 3923 * backlog and no outstanding request; used by 3924 * the weight-raising mechanism. 3925 */ 3926 bfqq->budget_timeout = jiffies; 3927 3928 bfq_del_bfqq_busy(bfqq, true); 3929 } else { 3930 bfq_requeue_bfqq(bfqd, bfqq, true); 3931 /* 3932 * Resort priority tree of potential close cooperators. 3933 * See comments on bfq_pos_tree_add_move() for the unlikely(). 3934 */ 3935 if (unlikely(!bfqd->nonrot_with_queueing && 3936 !RB_EMPTY_ROOT(&bfqq->sort_list))) 3937 bfq_pos_tree_add_move(bfqd, bfqq); 3938 } 3939 3940 /* 3941 * All in-service entities must have been properly deactivated 3942 * or requeued before executing the next function, which 3943 * resets all in-service entities as no more in service. This 3944 * may cause bfqq to be freed. If this happens, the next 3945 * function returns true.
3946 */ 3947 return __bfq_bfqd_reset_in_service(bfqd); 3948 } 3949 3950 /** 3951 * __bfq_bfqq_recalc_budget - try to adapt the budget to the @bfqq behavior. 3952 * @bfqd: device data. 3953 * @bfqq: queue to update. 3954 * @reason: reason for expiration. 3955 * 3956 * Handle the feedback on @bfqq budget at queue expiration. 3957 * See the body for detailed comments. 3958 */ 3959 static void __bfq_bfqq_recalc_budget(struct bfq_data *bfqd, 3960 struct bfq_queue *bfqq, 3961 enum bfqq_expiration reason) 3962 { 3963 struct request *next_rq; 3964 int budget, min_budget; 3965 3966 min_budget = bfq_min_budget(bfqd); 3967 3968 if (bfqq->wr_coeff == 1) 3969 budget = bfqq->max_budget; 3970 else /* 3971 * Use a constant, low budget for weight-raised queues, 3972 * to help achieve a low latency. Keep it slightly higher 3973 * than the minimum possible budget, to cause a little 3974 * bit fewer expirations. 3975 */ 3976 budget = 2 * min_budget; 3977 3978 bfq_log_bfqq(bfqd, bfqq, "recalc_budg: last budg %d, budg left %d", 3979 bfqq->entity.budget, bfq_bfqq_budget_left(bfqq)); 3980 bfq_log_bfqq(bfqd, bfqq, "recalc_budg: last max_budg %d, min budg %d", 3981 budget, bfq_min_budget(bfqd)); 3982 bfq_log_bfqq(bfqd, bfqq, "recalc_budg: sync %d, seeky %d", 3983 bfq_bfqq_sync(bfqq), BFQQ_SEEKY(bfqd->in_service_queue)); 3984 3985 if (bfq_bfqq_sync(bfqq) && bfqq->wr_coeff == 1) { 3986 switch (reason) { 3987 /* 3988 * Caveat: in all the following cases we trade latency 3989 * for throughput. 3990 */ 3991 case BFQQE_TOO_IDLE: 3992 /* 3993 * This is the only case where we may reduce 3994 * the budget: if there is no request of the 3995 * process still waiting for completion, then 3996 * we assume (tentatively) that the timer has 3997 * expired because the batch of requests of 3998 * the process could have been served with a 3999 * smaller budget. Hence, betting that 4000 * process will behave in the same way when it 4001 * becomes backlogged again, we reduce its 4002 * next budget. As long as we guess right, 4003 * this budget cut reduces the latency 4004 * experienced by the process. 4005 * 4006 * However, if there are still outstanding 4007 * requests, then the process may have not yet 4008 * issued its next request just because it is 4009 * still waiting for the completion of some of 4010 * the still outstanding ones. So in this 4011 * subcase we do not reduce its budget, on the 4012 * contrary we increase it to possibly boost 4013 * the throughput, as discussed in the 4014 * comments to the BUDGET_TIMEOUT case. 4015 */ 4016 if (bfqq->dispatched > 0) /* still outstanding reqs */ 4017 budget = min(budget * 2, bfqd->bfq_max_budget); 4018 else { 4019 if (budget > 5 * min_budget) 4020 budget -= 4 * min_budget; 4021 else 4022 budget = min_budget; 4023 } 4024 break; 4025 case BFQQE_BUDGET_TIMEOUT: 4026 /* 4027 * We double the budget here because it gives 4028 * the chance to boost the throughput if this 4029 * is not a seeky process (and has bumped into 4030 * this timeout because of, e.g., ZBR). 4031 */ 4032 budget = min(budget * 2, bfqd->bfq_max_budget); 4033 break; 4034 case BFQQE_BUDGET_EXHAUSTED: 4035 /* 4036 * The process still has backlog, and did not 4037 * let either the budget timeout or the disk 4038 * idling timeout expire. Hence it is not 4039 * seeky, has a short thinktime and may be 4040 * happy with a higher budget too. So 4041 * definitely increase the budget of this good 4042 * candidate to boost the disk throughput. 
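 * (Concretely, the statement below grows the budget by a factor
 * of 4 at each expiration of this kind, until bfqd->bfq_max_budget
 * is reached.)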
4043 */ 4044 budget = min(budget * 4, bfqd->bfq_max_budget); 4045 break; 4046 case BFQQE_NO_MORE_REQUESTS: 4047 /* 4048 * For queues that expire for this reason, it 4049 * is particularly important to keep the 4050 * budget close to the actual service they 4051 * need. Doing so reduces the timestamp 4052 * misalignment problem described in the 4053 * comments in the body of 4054 * __bfq_activate_entity. In fact, suppose 4055 * that a queue systematically expires for 4056 * BFQQE_NO_MORE_REQUESTS and presents a 4057 * new request in time to enjoy timestamp 4058 * back-shifting. The larger the budget of the 4059 * queue is with respect to the service the 4060 * queue actually requests in each service 4061 * slot, the more times the queue can be 4062 * reactivated with the same virtual finish 4063 * time. It follows that, even if this finish 4064 * time is pushed to the system virtual time 4065 * to reduce the consequent timestamp 4066 * misalignment, the queue unjustly enjoys for 4067 * many re-activations a lower finish time 4068 * than all newly activated queues. 4069 * 4070 * The service needed by bfqq is measured 4071 * quite precisely by bfqq->entity.service. 4072 * Since bfqq does not enjoy device idling, 4073 * bfqq->entity.service is equal to the number 4074 * of sectors that the process associated with 4075 * bfqq requested to read/write before waiting 4076 * for request completions, or blocking for 4077 * other reasons. 4078 */ 4079 budget = max_t(int, bfqq->entity.service, min_budget); 4080 break; 4081 default: 4082 return; 4083 } 4084 } else if (!bfq_bfqq_sync(bfqq)) { 4085 /* 4086 * Async queues get always the maximum possible 4087 * budget, as for them we do not care about latency 4088 * (in addition, their ability to dispatch is limited 4089 * by the charging factor). 4090 */ 4091 budget = bfqd->bfq_max_budget; 4092 } 4093 4094 bfqq->max_budget = budget; 4095 4096 if (bfqd->budgets_assigned >= bfq_stats_min_budgets && 4097 !bfqd->bfq_user_max_budget) 4098 bfqq->max_budget = min(bfqq->max_budget, bfqd->bfq_max_budget); 4099 4100 /* 4101 * If there is still backlog, then assign a new budget, making 4102 * sure that it is large enough for the next request. Since 4103 * the finish time of bfqq must be kept in sync with the 4104 * budget, be sure to call __bfq_bfqq_expire() *after* this 4105 * update. 4106 * 4107 * If there is no backlog, then no need to update the budget; 4108 * it will be updated on the arrival of a new request. 4109 */ 4110 next_rq = bfqq->next_rq; 4111 if (next_rq) 4112 bfqq->entity.budget = max_t(unsigned long, bfqq->max_budget, 4113 bfq_serv_to_charge(next_rq, bfqq)); 4114 4115 bfq_log_bfqq(bfqd, bfqq, "head sect: %u, new budget %d", 4116 next_rq ? blk_rq_sectors(next_rq) : 0, 4117 bfqq->entity.budget); 4118 } 4119 4120 /* 4121 * Return true if the process associated with bfqq is "slow". The slow 4122 * flag is used, in addition to the budget timeout, to reduce the 4123 * amount of service provided to seeky processes, and thus reduce 4124 * their chances to lower the throughput. More details in the comments 4125 * on the function bfq_bfqq_expire(). 4126 * 4127 * An important observation is in order: as discussed in the comments 4128 * on the function bfq_update_peak_rate(), with devices with internal 4129 * queues, it is hard if ever possible to know when and for how long 4130 * an I/O request is processed by the device (apart from the trivial 4131 * I/O pattern where a new request is dispatched only after the 4132 * previous one has been completed). 
This makes it hard to evaluate 4133 * the real rate at which the I/O requests of each bfq_queue are 4134 * served. In fact, for an I/O scheduler like BFQ, serving a 4135 * bfq_queue means just dispatching its requests during its service 4136 * slot (i.e., until the budget of the queue is exhausted, or the 4137 * queue remains idle, or, finally, a timeout fires). But, during the 4138 * service slot of a bfq_queue, around 100 ms at most, the device may 4139 * be even still processing requests of bfq_queues served in previous 4140 * service slots. On the opposite end, the requests of the in-service 4141 * bfq_queue may be completed after the service slot of the queue 4142 * finishes. 4143 * 4144 * Anyway, unless more sophisticated solutions are used 4145 * (where possible), the sum of the sizes of the requests dispatched 4146 * during the service slot of a bfq_queue is probably the only 4147 * approximation available for the service received by the bfq_queue 4148 * during its service slot. And this sum is the quantity used in this 4149 * function to evaluate the I/O speed of a process. 4150 */ 4151 static bool bfq_bfqq_is_slow(struct bfq_data *bfqd, struct bfq_queue *bfqq, 4152 bool compensate, unsigned long *delta_ms) 4153 { 4154 ktime_t delta_ktime; 4155 u32 delta_usecs; 4156 bool slow = BFQQ_SEEKY(bfqq); /* if delta too short, use seekyness */ 4157 4158 if (!bfq_bfqq_sync(bfqq)) 4159 return false; 4160 4161 if (compensate) 4162 delta_ktime = bfqd->last_idling_start; 4163 else 4164 delta_ktime = ktime_get(); 4165 delta_ktime = ktime_sub(delta_ktime, bfqd->last_budget_start); 4166 delta_usecs = ktime_to_us(delta_ktime); 4167 4168 /* don't use too short time intervals */ 4169 if (delta_usecs < 1000) { 4170 if (blk_queue_nonrot(bfqd->queue)) 4171 /* 4172 * give same worst-case guarantees as idling 4173 * for seeky 4174 */ 4175 *delta_ms = BFQ_MIN_TT / NSEC_PER_MSEC; 4176 else /* charge at least one seek */ 4177 *delta_ms = bfq_slice_idle / NSEC_PER_MSEC; 4178 4179 return slow; 4180 } 4181 4182 *delta_ms = delta_usecs / USEC_PER_MSEC; 4183 4184 /* 4185 * Use only long (> 20ms) intervals to filter out excessive 4186 * spikes in service rate estimation. 4187 */ 4188 if (delta_usecs > 20000) { 4189 /* 4190 * Caveat for rotational devices: processes doing I/O 4191 * in the slower disk zones tend to be slow(er) even 4192 * if not seeky. In this respect, the estimated peak 4193 * rate is likely to be an average over the disk 4194 * surface. Accordingly, to not be too harsh with 4195 * unlucky processes, a process is deemed slow only if 4196 * its rate has been lower than half of the estimated 4197 * peak rate. 4198 */ 4199 slow = bfqq->entity.service < bfqd->bfq_max_budget / 2; 4200 } 4201 4202 bfq_log_bfqq(bfqd, bfqq, "bfq_bfqq_is_slow: slow %d", slow); 4203 4204 return slow; 4205 } 4206 4207 /* 4208 * To be deemed as soft real-time, an application must meet two 4209 * requirements. First, the application must not require an average 4210 * bandwidth higher than the approximate bandwidth required to playback or 4211 * record a compressed high-definition video. 4212 * The next function is invoked on the completion of the last request of a 4213 * batch, to compute the next-start time instant, soft_rt_next_start, such 4214 * that, if the next request of the application does not arrive before 4215 * soft_rt_next_start, then the above requirement on the bandwidth is met. 
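 *
 * For instance, if the queue has received S sectors of service since
 * the time instant stored in last_idle_bklogged (S being the value
 * accumulated in service_from_backlogged), then the bandwidth
 * requirement is met only if the next request arrives at least
 * S / bfq_wr_max_softrt_rate seconds after that instant; this is
 * exactly the second argument of the max3() in the function below.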
4216 * 4217 * The second requirement is that the request pattern of the application is 4218 * isochronous, i.e., that, after issuing a request or a batch of requests, 4219 * the application stops issuing new requests until all its pending requests 4220 * have been completed. After that, the application may issue a new batch, 4221 * and so on. 4222 * For this reason the next function is invoked to compute 4223 * soft_rt_next_start only for applications that meet this requirement, 4224 * whereas soft_rt_next_start is set to infinity for applications that do 4225 * not. 4226 * 4227 * Unfortunately, even a greedy (i.e., I/O-bound) application may 4228 * happen to meet, occasionally or systematically, both the above 4229 * bandwidth and isochrony requirements. This may happen at least in 4230 * the following circumstances. First, if the CPU load is high. The 4231 * application may stop issuing requests while the CPUs are busy 4232 * serving other processes, then restart, then stop again for a while, 4233 * and so on. The other circumstances are related to the storage 4234 * device: the storage device is highly loaded or reaches a low-enough 4235 * throughput with the I/O of the application (e.g., because the I/O 4236 * is random and/or the device is slow). In all these cases, the 4237 * I/O of the application may be simply slowed down enough to meet 4238 * the bandwidth and isochrony requirements. To reduce the probability 4239 * that greedy applications are deemed as soft real-time in these 4240 * corner cases, a further rule is used in the computation of 4241 * soft_rt_next_start: the return value of this function is forced to 4242 * be higher than the maximum between the following two quantities. 4243 * 4244 * (a) Current time plus: (1) the maximum time for which the arrival 4245 * of a request is waited for when a sync queue becomes idle, 4246 * namely bfqd->bfq_slice_idle, and (2) a few extra jiffies. We 4247 * postpone for a moment the reason for adding a few extra 4248 * jiffies; we get back to it after next item (b). Lower-bounding 4249 * the return value of this function with the current time plus 4250 * bfqd->bfq_slice_idle tends to filter out greedy applications, 4251 * because the latter issue their next request as soon as possible 4252 * after the last one has been completed. In contrast, a soft 4253 * real-time application spends some time processing data, after a 4254 * batch of its requests has been completed. 4255 * 4256 * (b) Current value of bfqq->soft_rt_next_start. As pointed out 4257 * above, greedy applications may happen to meet both the 4258 * bandwidth and isochrony requirements under heavy CPU or 4259 * storage-device load. In more detail, in these scenarios, these 4260 * applications happen, only for limited time periods, to do I/O 4261 * slowly enough to meet all the requirements described so far, 4262 * including the filtering in above item (a). These slow-speed 4263 * time intervals are usually interspersed between other time 4264 * intervals during which these applications do I/O at a very high 4265 * speed. Fortunately, exactly because of the high speed of the 4266 * I/O in the high-speed intervals, the values returned by this 4267 * function happen to be so high, near the end of any such 4268 * high-speed interval, to be likely to fall *after* the end of 4269 * the low-speed time interval that follows. These high values are 4270 * stored in bfqq->soft_rt_next_start after each invocation of 4271 * this function. 
As a consequence, if the last value of 4272 * bfqq->soft_rt_next_start is constantly used to lower-bound the 4273 * next value that this function may return, then, from the very 4274 * beginning of a low-speed interval, bfqq->soft_rt_next_start is 4275 * likely to be constantly kept so high that any I/O request 4276 * issued during the low-speed interval is considered as arriving 4277 * too soon for the application to be deemed as soft 4278 * real-time. Then, in the high-speed interval that follows, the 4279 * application will not be deemed as soft real-time, just because 4280 * it will do I/O at a high speed. And so on. 4281 * 4282 * Getting back to the filtering in item (a), in the following two 4283 * cases this filtering might be easily passed by a greedy 4284 * application, if the reference quantity was just 4285 * bfqd->bfq_slice_idle: 4286 * 1) HZ is so low that the duration of a jiffy is comparable to or 4287 * higher than bfqd->bfq_slice_idle. This happens, e.g., on slow 4288 * devices with HZ=100. The time granularity may be so coarse 4289 * that the approximation, in jiffies, of bfqd->bfq_slice_idle 4290 * is rather lower than the exact value. 4291 * 2) jiffies, instead of increasing at a constant rate, may stop increasing 4292 * for a while, then suddenly 'jump' by several units to recover the lost 4293 * increments. This seems to happen, e.g., inside virtual machines. 4294 * To address this issue, in the filtering in (a) we do not use as a 4295 * reference time interval just bfqd->bfq_slice_idle, but 4296 * bfqd->bfq_slice_idle plus a few jiffies. In particular, we add the 4297 * minimum number of jiffies for which the filter seems to be quite 4298 * precise also in embedded systems and KVM/QEMU virtual machines. 4299 */ 4300 static unsigned long bfq_bfqq_softrt_next_start(struct bfq_data *bfqd, 4301 struct bfq_queue *bfqq) 4302 { 4303 return max3(bfqq->soft_rt_next_start, 4304 bfqq->last_idle_bklogged + 4305 HZ * bfqq->service_from_backlogged / 4306 bfqd->bfq_wr_max_softrt_rate, 4307 jiffies + nsecs_to_jiffies(bfqq->bfqd->bfq_slice_idle) + 4); 4308 } 4309 4310 /** 4311 * bfq_bfqq_expire - expire a queue. 4312 * @bfqd: device owning the queue. 4313 * @bfqq: the queue to expire. 4314 * @compensate: if true, compensate for the time spent idling. 4315 * @reason: the reason causing the expiration. 4316 * 4317 * If the process associated with bfqq does slow I/O (e.g., because it 4318 * issues random requests), we charge bfqq with the time it has been 4319 * in service instead of the service it has received (see 4320 * bfq_bfqq_charge_time for details on how this goal is achieved). As 4321 * a consequence, bfqq will typically get higher timestamps upon 4322 * reactivation, and hence it will be rescheduled as if it had 4323 * received more service than what it has actually received. In the 4324 * end, bfqq receives less service in proportion to how slowly its 4325 * associated process consumes its budgets (and hence how seriously it 4326 * tends to lower the throughput). In addition, this time-charging 4327 * strategy guarantees time fairness among slow processes. In 4328 * contrast, if the process associated with bfqq is not slow, we 4329 * charge bfqq exactly with the service it has received. 4330 * 4331 * Charging time to the first type of queues and the exact service to 4332 * the other has the effect of using the WF2Q+ policy to schedule the 4333 * former on a timeslice basis, without violating service domain 4334 * guarantees among the latter.
4335 */ 4336 void bfq_bfqq_expire(struct bfq_data *bfqd, 4337 struct bfq_queue *bfqq, 4338 bool compensate, 4339 enum bfqq_expiration reason) 4340 { 4341 bool slow; 4342 unsigned long delta = 0; 4343 struct bfq_entity *entity = &bfqq->entity; 4344 4345 /* 4346 * Check whether the process is slow (see bfq_bfqq_is_slow). 4347 */ 4348 slow = bfq_bfqq_is_slow(bfqd, bfqq, compensate, &delta); 4349 4350 /* 4351 * As above explained, charge slow (typically seeky) and 4352 * timed-out queues with the time and not the service 4353 * received, to favor sequential workloads. 4354 * 4355 * Processes doing I/O in the slower disk zones will tend to 4356 * be slow(er) even if not seeky. Therefore, since the 4357 * estimated peak rate is actually an average over the disk 4358 * surface, these processes may timeout just for bad luck. To 4359 * avoid punishing them, do not charge time to processes that 4360 * succeeded in consuming at least 2/3 of their budget. This 4361 * allows BFQ to preserve enough elasticity to still perform 4362 * bandwidth, and not time, distribution with little unlucky 4363 * or quasi-sequential processes. 4364 */ 4365 if (bfqq->wr_coeff == 1 && 4366 (slow || 4367 (reason == BFQQE_BUDGET_TIMEOUT && 4368 bfq_bfqq_budget_left(bfqq) >= entity->budget / 3))) 4369 bfq_bfqq_charge_time(bfqd, bfqq, delta); 4370 4371 if (bfqd->low_latency && bfqq->wr_coeff == 1) 4372 bfqq->last_wr_start_finish = jiffies; 4373 4374 if (bfqd->low_latency && bfqd->bfq_wr_max_softrt_rate > 0 && 4375 RB_EMPTY_ROOT(&bfqq->sort_list)) { 4376 /* 4377 * If we get here, and there are no outstanding 4378 * requests, then the request pattern is isochronous 4379 * (see the comments on the function 4380 * bfq_bfqq_softrt_next_start()). Therefore we can 4381 * compute soft_rt_next_start. 4382 * 4383 * If, instead, the queue still has outstanding 4384 * requests, then we have to wait for the completion 4385 * of all the outstanding requests to discover whether 4386 * the request pattern is actually isochronous. 4387 */ 4388 if (bfqq->dispatched == 0) 4389 bfqq->soft_rt_next_start = 4390 bfq_bfqq_softrt_next_start(bfqd, bfqq); 4391 else if (bfqq->dispatched > 0) { 4392 /* 4393 * Schedule an update of soft_rt_next_start to when 4394 * the task may be discovered to be isochronous. 4395 */ 4396 bfq_mark_bfqq_softrt_update(bfqq); 4397 } 4398 } 4399 4400 bfq_log_bfqq(bfqd, bfqq, 4401 "expire (%d, slow %d, num_disp %d, short_ttime %d)", reason, 4402 slow, bfqq->dispatched, bfq_bfqq_has_short_ttime(bfqq)); 4403 4404 /* 4405 * bfqq expired, so no total service time needs to be computed 4406 * any longer: reset state machine for measuring total service 4407 * times. 4408 */ 4409 bfqd->rqs_injected = bfqd->wait_dispatch = false; 4410 bfqd->waited_rq = NULL; 4411 4412 /* 4413 * Increase, decrease or leave budget unchanged according to 4414 * reason. 
4415 */ 4416 __bfq_bfqq_recalc_budget(bfqd, bfqq, reason); 4417 if (__bfq_bfqq_expire(bfqd, bfqq, reason)) 4418 /* bfqq is gone, no more actions on it */ 4419 return; 4420 4421 /* mark bfqq as waiting a request only if a bic still points to it */ 4422 if (!bfq_bfqq_busy(bfqq) && 4423 reason != BFQQE_BUDGET_TIMEOUT && 4424 reason != BFQQE_BUDGET_EXHAUSTED) { 4425 bfq_mark_bfqq_non_blocking_wait_rq(bfqq); 4426 /* 4427 * Not setting service to 0, because, if the next rq 4428 * arrives in time, the queue will go on receiving 4429 * service with this same budget (as if it never expired) 4430 */ 4431 } else 4432 entity->service = 0; 4433 4434 /* 4435 * Reset the received-service counter for every parent entity. 4436 * Differently from what happens with bfqq->entity.service, 4437 * the resetting of this counter never needs to be postponed 4438 * for parent entities. In fact, in case bfqq may have a 4439 * chance to go on being served using the last, partially 4440 * consumed budget, bfqq->entity.service needs to be kept, 4441 * because if bfqq then actually goes on being served using 4442 * the same budget, the last value of bfqq->entity.service is 4443 * needed to properly decrement bfqq->entity.budget by the 4444 * portion already consumed. In contrast, it is not necessary 4445 * to keep entity->service for parent entities too, because 4446 * the bubble up of the new value of bfqq->entity.budget will 4447 * make sure that the budgets of parent entities are correct, 4448 * even in case bfqq and thus parent entities go on receiving 4449 * service with the same budget. 4450 */ 4451 entity = entity->parent; 4452 for_each_entity(entity) 4453 entity->service = 0; 4454 } 4455 4456 /* 4457 * Budget timeout is not implemented through a dedicated timer, but 4458 * just checked on request arrivals and completions, as well as on 4459 * idle timer expirations. 4460 */ 4461 static bool bfq_bfqq_budget_timeout(struct bfq_queue *bfqq) 4462 { 4463 return time_is_before_eq_jiffies(bfqq->budget_timeout); 4464 } 4465 4466 /* 4467 * If we expire a queue that is actively waiting (i.e., with the 4468 * device idled) for the arrival of a new request, then we may incur 4469 * the timestamp misalignment problem described in the body of the 4470 * function __bfq_activate_entity. Hence we return true only if this 4471 * condition does not hold, or if the queue is slow enough to deserve 4472 * only to be kicked off for preserving a high throughput. 
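 * For reference, the budget-timeout test used by the function below
 * (through bfq_bfqq_budget_timeout above) relies on the wrap-safe
 * jiffies helpers; time_is_before_eq_jiffies(t) is conceptually the
 * signed-difference check sketched here (illustrative, not the
 * actual macro expansion):
 *
 *   (long)(jiffies - t) >= 0, i.e., t is now or in the past,
 *
 * so the comparison keeps working across jiffies wraparound.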
4473 */ 4474 static bool bfq_may_expire_for_budg_timeout(struct bfq_queue *bfqq) 4475 { 4476 bfq_log_bfqq(bfqq->bfqd, bfqq, 4477 "may_budget_timeout: wait_request %d left %d timeout %d", 4478 bfq_bfqq_wait_request(bfqq), 4479 bfq_bfqq_budget_left(bfqq) >= bfqq->entity.budget / 3, 4480 bfq_bfqq_budget_timeout(bfqq)); 4481 4482 return (!bfq_bfqq_wait_request(bfqq) || 4483 bfq_bfqq_budget_left(bfqq) >= bfqq->entity.budget / 3) 4484 && 4485 bfq_bfqq_budget_timeout(bfqq); 4486 } 4487 4488 static bool idling_boosts_thr_without_issues(struct bfq_data *bfqd, 4489 struct bfq_queue *bfqq) 4490 { 4491 bool rot_without_queueing = 4492 !blk_queue_nonrot(bfqd->queue) && !bfqd->hw_tag, 4493 bfqq_sequential_and_IO_bound, 4494 idling_boosts_thr; 4495 4496 /* No point in idling for bfqq if it won't get requests any longer */ 4497 if (unlikely(!bfqq_process_refs(bfqq))) 4498 return false; 4499 4500 bfqq_sequential_and_IO_bound = !BFQQ_SEEKY(bfqq) && 4501 bfq_bfqq_IO_bound(bfqq) && bfq_bfqq_has_short_ttime(bfqq); 4502 4503 /* 4504 * The next variable takes into account the cases where idling 4505 * boosts the throughput. 4506 * 4507 * The value of the variable is computed considering, first, that 4508 * idling is virtually always beneficial for the throughput if: 4509 * (a) the device is not NCQ-capable and rotational, or 4510 * (b) regardless of the presence of NCQ, the device is rotational and 4511 * the request pattern for bfqq is I/O-bound and sequential, or 4512 * (c) regardless of whether it is rotational, the device is 4513 * not NCQ-capable and the request pattern for bfqq is 4514 * I/O-bound and sequential. 4515 * 4516 * Secondly, and in contrast to the above item (b), idling an 4517 * NCQ-capable flash-based device would not boost the 4518 * throughput even with sequential I/O; rather it would lower 4519 * the throughput in proportion to how fast the device 4520 * is. Accordingly, the next variable is true if any of the 4521 * above conditions (a), (b) or (c) is true, and, in 4522 * particular, happens to be false if bfqd is an NCQ-capable 4523 * flash-based device. 4524 */ 4525 idling_boosts_thr = rot_without_queueing || 4526 ((!blk_queue_nonrot(bfqd->queue) || !bfqd->hw_tag) && 4527 bfqq_sequential_and_IO_bound); 4528 4529 /* 4530 * The return value of this function is equal to that of 4531 * idling_boosts_thr, unless a special case holds. In this 4532 * special case, described below, idling may cause problems to 4533 * weight-raised queues. 4534 * 4535 * When the request pool is saturated (e.g., in the presence 4536 * of write hogs), if the processes associated with 4537 * non-weight-raised queues ask for requests at a lower rate, 4538 * then processes associated with weight-raised queues have a 4539 * higher probability to get a request from the pool 4540 * immediately (or at least soon) when they need one. Thus 4541 * they have a higher probability to actually get a fraction 4542 * of the device throughput proportional to their high 4543 * weight. This is especially true with NCQ-capable drives, 4544 * which enqueue several requests in advance, and further 4545 * reorder internally-queued requests. 4546 * 4547 * For this reason, we force to false the return value if 4548 * there are weight-raised busy queues. In this case, and if 4549 * bfqq is not weight-raised, this guarantees that the device 4550 * is not idled for bfqq (if, instead, bfqq is weight-raised, 4551 * then idling will be guaranteed by another variable, see 4552 * below). 
Combined with the timestamping rules of BFQ (see 4553 * [1] for details), this behavior causes bfqq, and hence any 4554 * sync non-weight-raised queue, to get a lower number of 4555 * requests served, and thus to ask for a lower number of 4556 * requests from the request pool, before the busy 4557 * weight-raised queues get served again. This often mitigates 4558 * starvation problems in the presence of heavy write 4559 * workloads and NCQ, thereby guaranteeing a higher 4560 * application and system responsiveness in these hostile 4561 * scenarios. 4562 */ 4563 return idling_boosts_thr && 4564 bfqd->wr_busy_queues == 0; 4565 } 4566 4567 /* 4568 * For a queue that becomes empty, device idling is allowed only if 4569 * this function returns true for that queue. As a consequence, since 4570 * device idling plays a critical role for both throughput boosting 4571 * and service guarantees, the return value of this function plays a 4572 * critical role as well. 4573 * 4574 * In a nutshell, this function returns true only if idling is 4575 * beneficial for throughput or, even if detrimental for throughput, 4576 * idling is however necessary to preserve service guarantees (low 4577 * latency, desired throughput distribution, ...). In particular, on 4578 * NCQ-capable devices, this function tries to return false, so as to 4579 * help keep the drives' internal queues full, whenever this helps the 4580 * device boost the throughput without causing any service-guarantee 4581 * issue. 4582 * 4583 * Most of the issues taken into account to get the return value of 4584 * this function are not trivial. We discuss these issues in the two 4585 * functions providing the main pieces of information needed by this 4586 * function. 4587 */ 4588 static bool bfq_better_to_idle(struct bfq_queue *bfqq) 4589 { 4590 struct bfq_data *bfqd = bfqq->bfqd; 4591 bool idling_boosts_thr_with_no_issue, idling_needed_for_service_guar; 4592 4593 /* No point in idling for bfqq if it won't get requests any longer */ 4594 if (unlikely(!bfqq_process_refs(bfqq))) 4595 return false; 4596 4597 if (unlikely(bfqd->strict_guarantees)) 4598 return true; 4599 4600 /* 4601 * Idling is performed only if slice_idle > 0. In addition, we 4602 * do not idle if 4603 * (a) bfqq is async 4604 * (b) bfqq is in the idle io prio class: in this case we do 4605 * not idle because we want to minimize the bandwidth that 4606 * queues in this class can steal to higher-priority queues 4607 */ 4608 if (bfqd->bfq_slice_idle == 0 || !bfq_bfqq_sync(bfqq) || 4609 bfq_class_idle(bfqq)) 4610 return false; 4611 4612 idling_boosts_thr_with_no_issue = 4613 idling_boosts_thr_without_issues(bfqd, bfqq); 4614 4615 idling_needed_for_service_guar = 4616 idling_needed_for_service_guarantees(bfqd, bfqq); 4617 4618 /* 4619 * We have now the two components we need to compute the 4620 * return value of the function, which is true only if idling 4621 * either boosts the throughput (without issues), or is 4622 * necessary to preserve service guarantees. 4623 */ 4624 return idling_boosts_thr_with_no_issue || 4625 idling_needed_for_service_guar; 4626 } 4627 4628 /* 4629 * If the in-service queue is empty but the function bfq_better_to_idle 4630 * returns true, then: 4631 * 1) the queue must remain in service and cannot be expired, and 4632 * 2) the device must be idled to wait for the possible arrival of a new 4633 * request for the queue. 
4634 * See the comments on the function bfq_better_to_idle for the reasons 4635 * why performing device idling is the best choice to boost the throughput 4636 * and preserve service guarantees when bfq_better_to_idle itself 4637 * returns true. 4638 */ 4639 static bool bfq_bfqq_must_idle(struct bfq_queue *bfqq) 4640 { 4641 return RB_EMPTY_ROOT(&bfqq->sort_list) && bfq_better_to_idle(bfqq); 4642 } 4643 4644 /* 4645 * This function chooses the queue from which to pick the next extra 4646 * I/O request to inject, if it finds a compatible queue. See the 4647 * comments on bfq_update_inject_limit() for details on the injection 4648 * mechanism, and for the definitions of the quantities mentioned 4649 * below. 4650 */ 4651 static struct bfq_queue * 4652 bfq_choose_bfqq_for_injection(struct bfq_data *bfqd) 4653 { 4654 struct bfq_queue *bfqq, *in_serv_bfqq = bfqd->in_service_queue; 4655 unsigned int limit = in_serv_bfqq->inject_limit; 4656 int i; 4657 4658 /* 4659 * If 4660 * - bfqq is not weight-raised and therefore does not carry 4661 * time-critical I/O, 4662 * or 4663 * - regardless of whether bfqq is weight-raised, bfqq has 4664 * however a long think time, during which it can absorb the 4665 * effect of an appropriate number of extra I/O requests 4666 * from other queues (see bfq_update_inject_limit for 4667 * details on the computation of this number); 4668 * then injection can be performed without restrictions. 4669 */ 4670 bool in_serv_always_inject = in_serv_bfqq->wr_coeff == 1 || 4671 !bfq_bfqq_has_short_ttime(in_serv_bfqq); 4672 4673 /* 4674 * If 4675 * - the baseline total service time could not be sampled yet, 4676 * so the inject limit happens to be still 0, and 4677 * - a lot of time has elapsed since the plugging of I/O 4678 * dispatching started, so drive speed is being wasted 4679 * significantly; 4680 * then temporarily raise inject limit to one request. 4681 */ 4682 if (limit == 0 && in_serv_bfqq->last_serv_time_ns == 0 && 4683 bfq_bfqq_wait_request(in_serv_bfqq) && 4684 time_is_before_eq_jiffies(bfqd->last_idling_start_jiffies + 4685 bfqd->bfq_slice_idle) 4686 ) 4687 limit = 1; 4688 4689 if (bfqd->tot_rq_in_driver >= limit) 4690 return NULL; 4691 4692 /* 4693 * Linear search of the source queue for injection; but, with 4694 * a high probability, very few steps are needed to find a 4695 * candidate queue, i.e., a queue with enough budget left for 4696 * its next request. In fact: 4697 * - BFQ dynamically updates the budget of every queue so as 4698 * to accommodate the expected backlog of the queue; 4699 * - if a queue gets all its requests dispatched as injected 4700 * service, then the queue is removed from the active list 4701 * (and re-added only if it gets new requests, but then it 4702 * is assigned again enough budget for its new backlog). 4703 */ 4704 for (i = 0; i < bfqd->num_actuators; i++) { 4705 list_for_each_entry(bfqq, &bfqd->active_list[i], bfqq_list) 4706 if (!RB_EMPTY_ROOT(&bfqq->sort_list) && 4707 (in_serv_always_inject || bfqq->wr_coeff > 1) && 4708 bfq_serv_to_charge(bfqq->next_rq, bfqq) <= 4709 bfq_bfqq_budget_left(bfqq)) { 4710 /* 4711 * Allow for only one large in-flight request 4712 * on non-rotational devices, for the 4713 * following reason. On non-rotational drives, 4714 * large requests take much longer than 4715 * smaller requests to be served. In addition, 4716 * the drive prefers to serve large requests 4717 * w.r.t. small ones, if it can choose.
So, 4718 * having more than one large request queued 4719 * in the drive may easily make the next first 4720 * request of the in-service queue wait so 4721 * long as to break bfqq's service guarantees. On 4722 * the bright side, large requests let the 4723 * drive reach a very high throughput, even if 4724 * there is only one in-flight large request 4725 * at a time. 4726 */ 4727 if (blk_queue_nonrot(bfqd->queue) && 4728 blk_rq_sectors(bfqq->next_rq) >= 4729 BFQQ_SECT_THR_NONROT && 4730 bfqd->tot_rq_in_driver >= 1) 4731 continue; 4732 else { 4733 bfqd->rqs_injected = true; 4734 return bfqq; 4735 } 4736 } 4737 } 4738 4739 return NULL; 4740 } 4741 4742 static struct bfq_queue * 4743 bfq_find_active_bfqq_for_actuator(struct bfq_data *bfqd, int idx) 4744 { 4745 struct bfq_queue *bfqq; 4746 4747 if (bfqd->in_service_queue && 4748 bfqd->in_service_queue->actuator_idx == idx) 4749 return bfqd->in_service_queue; 4750 4751 list_for_each_entry(bfqq, &bfqd->active_list[idx], bfqq_list) { 4752 if (!RB_EMPTY_ROOT(&bfqq->sort_list) && 4753 bfq_serv_to_charge(bfqq->next_rq, bfqq) <= 4754 bfq_bfqq_budget_left(bfqq)) { 4755 return bfqq; 4756 } 4757 } 4758 4759 return NULL; 4760 } 4761 4762 /* 4763 * Perform a linear scan of each actuator, until an actuator is found 4764 * for which the following three conditions hold: the load of the 4765 * actuator is below the threshold (see comments on 4766 * actuator_load_threshold for details) and lower than that of the 4767 * next actuator (comments on this extra condition below), and there 4768 * is a queue that contains I/O for that actuator. On success, return 4769 * that queue. 4770 * 4771 * Performing a plain linear scan entails a prioritization among 4772 * actuators. The extra condition above breaks this prioritization and 4773 * tends to distribute injection uniformly across actuators. 4774 */ 4775 static struct bfq_queue * 4776 bfq_find_bfqq_for_underused_actuator(struct bfq_data *bfqd) 4777 { 4778 int i; 4779 4780 for (i = 0 ; i < bfqd->num_actuators; i++) { 4781 if (bfqd->rq_in_driver[i] < bfqd->actuator_load_threshold && 4782 (i == bfqd->num_actuators - 1 || 4783 bfqd->rq_in_driver[i] < bfqd->rq_in_driver[i+1])) { 4784 struct bfq_queue *bfqq = 4785 bfq_find_active_bfqq_for_actuator(bfqd, i); 4786 4787 if (bfqq) 4788 return bfqq; 4789 } 4790 } 4791 4792 return NULL; 4793 } 4794 4795 4796 /* 4797 * Select a queue for service. If we have a current queue in service, 4798 * check whether to continue servicing it, or retrieve and set a new one. 4799 */ 4800 static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd) 4801 { 4802 struct bfq_queue *bfqq, *inject_bfqq; 4803 struct request *next_rq; 4804 enum bfqq_expiration reason = BFQQE_BUDGET_TIMEOUT; 4805 4806 bfqq = bfqd->in_service_queue; 4807 if (!bfqq) 4808 goto new_queue; 4809 4810 bfq_log_bfqq(bfqd, bfqq, "select_queue: already in-service queue"); 4811 4812 /* 4813 * Do not expire bfqq for budget timeout if bfqq may be about 4814 * to enjoy device idling. The reason why, in this case, we 4815 * prevent bfqq from expiring is the same as in the comments 4816 * on the case where bfq_bfqq_must_idle() returns true, in 4817 * bfq_completed_request(). 4818 */ 4819 if (bfq_may_expire_for_budg_timeout(bfqq) && 4820 !bfq_bfqq_must_idle(bfqq)) 4821 goto expire; 4822 4823 check_queue: 4824 /* 4825 * If some actuator is underutilized, but the in-service 4826 * queue does not contain I/O for that actuator, then try to 4827 * inject I/O for that actuator.
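 * An illustrative walk-through of that scan (hypothetical numbers):
 * on a two-actuator drive with actuator_load_threshold = 4 and
 * per-actuator loads rq_in_driver = {3, 1},
 * bfq_find_bfqq_for_underused_actuator() skips actuator 0, whose
 * load is below the threshold but not lower than that of actuator 1,
 * and returns an active queue with pending I/O for actuator 1, if
 * one exists. The extra "lower than the next actuator" condition is
 * what spreads injection across actuators instead of always
 * favoring the first one.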
4828 */ 4829 inject_bfqq = bfq_find_bfqq_for_underused_actuator(bfqd); 4830 if (inject_bfqq && inject_bfqq != bfqq) 4831 return inject_bfqq; 4832 4833 /* 4834 * This loop is rarely executed more than once. Even when it 4835 * happens, it is much more convenient to re-execute this loop 4836 * than to return NULL and trigger a new dispatch to get a 4837 * request served. 4838 */ 4839 next_rq = bfqq->next_rq; 4840 /* 4841 * If bfqq has requests queued and it has enough budget left to 4842 * serve them, keep the queue, otherwise expire it. 4843 */ 4844 if (next_rq) { 4845 if (bfq_serv_to_charge(next_rq, bfqq) > 4846 bfq_bfqq_budget_left(bfqq)) { 4847 /* 4848 * Expire the queue for budget exhaustion, 4849 * which makes sure that the next budget is 4850 * enough to serve the next request, even if 4851 * it comes from the fifo expired path. 4852 */ 4853 reason = BFQQE_BUDGET_EXHAUSTED; 4854 goto expire; 4855 } else { 4856 /* 4857 * The idle timer may be pending because we may 4858 * not disable disk idling even when a new request 4859 * arrives. 4860 */ 4861 if (bfq_bfqq_wait_request(bfqq)) { 4862 /* 4863 * If we get here: 1) at least a new request 4864 * has arrived but we have not disabled the 4865 * timer because the request was too small, 4866 * 2) then the block layer has unplugged 4867 * the device, causing the dispatch to be 4868 * invoked. 4869 * 4870 * Since the device is unplugged, now the 4871 * requests are probably large enough to 4872 * provide a reasonable throughput. 4873 * So we disable idling. 4874 */ 4875 bfq_clear_bfqq_wait_request(bfqq); 4876 hrtimer_try_to_cancel(&bfqd->idle_slice_timer); 4877 } 4878 goto keep_queue; 4879 } 4880 } 4881 4882 /* 4883 * No requests pending. However, if the in-service queue is idling 4884 * for a new request, or has requests waiting for a completion and 4885 * may idle after their completion, then keep it anyway. 4886 * 4887 * Yet, inject service from other queues if it boosts 4888 * throughput and is possible. 4889 */ 4890 if (bfq_bfqq_wait_request(bfqq) || 4891 (bfqq->dispatched != 0 && bfq_better_to_idle(bfqq))) { 4892 unsigned int act_idx = bfqq->actuator_idx; 4893 struct bfq_queue *async_bfqq = NULL; 4894 struct bfq_queue *blocked_bfqq = 4895 !hlist_empty(&bfqq->woken_list) ? 4896 container_of(bfqq->woken_list.first, 4897 struct bfq_queue, 4898 woken_list_node) 4899 : NULL; 4900 4901 if (bfqq->bic && bfqq->bic->bfqq[0][act_idx] && 4902 bfq_bfqq_busy(bfqq->bic->bfqq[0][act_idx]) && 4903 bfqq->bic->bfqq[0][act_idx]->next_rq) 4904 async_bfqq = bfqq->bic->bfqq[0][act_idx]; 4905 /* 4906 * The next four mutually-exclusive ifs decide 4907 * whether to try injection, and choose the queue to 4908 * pick an I/O request from. 4909 * 4910 * The first if checks whether the process associated 4911 * with bfqq has also async I/O pending. If so, it 4912 * injects such I/O unconditionally. Injecting async 4913 * I/O from the same process can cause no harm to the 4914 * process. On the contrary, it can only increase 4915 * bandwidth and reduce latency for the process. 4916 * 4917 * The second if checks whether there happens to be a 4918 * non-empty waker queue for bfqq, i.e., a queue whose 4919 * I/O needs to be completed for bfqq to receive new 4920 * I/O. This happens, e.g., if bfqq is associated with 4921 * a process that does some sync. A sync generates 4922 * extra blocking I/O, which must be completed before 4923 * the process associated with bfqq can go on with its 4924 * I/O. 
If the I/O of the waker queue is not served, 4925 * then bfqq remains empty, and no I/O is dispatched, 4926 * until the idle timeout fires for bfqq. This is 4927 * likely to result in lower bandwidth and higher 4928 * latencies for bfqq, and in a severe loss of total 4929 * throughput. The best action to take is therefore to 4930 * serve the waker queue as soon as possible. So do it 4931 * (without relying on the third alternative below for 4932 * eventually serving waker_bfqq's I/O; see the last 4933 * paragraph for further details). This systematic 4934 * injection of I/O from the waker queue does not 4935 * cause any delay to bfqq's I/O. On the contrary, 4936 * next bfqq's I/O is brought forward dramatically, 4937 * for it is not blocked for milliseconds. 4938 * 4939 * The third if checks whether there is a queue woken 4940 * by bfqq, and currently with pending I/O. Such a 4941 * woken queue does not steal bandwidth from bfqq, 4942 * because it remains soon without I/O if bfqq is not 4943 * served. So there is virtually no risk of loss of 4944 * bandwidth for bfqq if this woken queue has I/O 4945 * dispatched while bfqq is waiting for new I/O. 4946 * 4947 * The fourth if checks whether bfqq is a queue for 4948 * which it is better to avoid injection. It is so if 4949 * bfqq delivers more throughput when served without 4950 * any further I/O from other queues in the middle, or 4951 * if the service times of bfqq's I/O requests both 4952 * count more than overall throughput, and may be 4953 * easily increased by injection (this happens if bfqq 4954 * has a short think time). If none of these 4955 * conditions holds, then a candidate queue for 4956 * injection is looked for through 4957 * bfq_choose_bfqq_for_injection(). Note that the 4958 * latter may return NULL (for example if the inject 4959 * limit for bfqq is currently 0). 4960 * 4961 * NOTE: motivation for the second alternative 4962 * 4963 * Thanks to the way the inject limit is updated in 4964 * bfq_update_has_short_ttime(), it is rather likely 4965 * that, if I/O is being plugged for bfqq and the 4966 * waker queue has pending I/O requests that are 4967 * blocking bfqq's I/O, then the fourth alternative 4968 * above lets the waker queue get served before the 4969 * I/O-plugging timeout fires. So one may deem the 4970 * second alternative superfluous. It is not, because 4971 * the fourth alternative may be way less effective in 4972 * case of a synchronization. For two main 4973 * reasons. First, throughput may be low because the 4974 * inject limit may be too low to guarantee the same 4975 * amount of injected I/O, from the waker queue or 4976 * other queues, that the second alternative 4977 * guarantees (the second alternative unconditionally 4978 * injects a pending I/O request of the waker queue 4979 * for each bfq_dispatch_request()). Second, with the 4980 * fourth alternative, the duration of the plugging, 4981 * i.e., the time before bfqq finally receives new I/O, 4982 * may not be minimized, because the waker queue may 4983 * happen to be served only after other queues. 
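 * A concrete, hypothetical instance of the second alternative: bfqq
 * belongs to a process blocked in fsync(), whose progress depends on
 * the I/O of the filesystem journaling thread (e.g., jbd2 on ext4).
 * That journal queue gets detected as bfqq's waker queue, and its
 * pending requests are injected while the device would otherwise sit
 * idle waiting for bfqq, so the fsync completes much sooner. In
 * compact form, the selection order implemented below is roughly:
 *
 *   async I/O of the same process, then waker queue, then woken
 *   queue, then generic injection via bfq_choose_bfqq_for_injection()
 *
 * with each candidate considered only if it has pending I/O and
 * enough budget left for its next request.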
4984 */ 4985 if (async_bfqq && 4986 icq_to_bic(async_bfqq->next_rq->elv.icq) == bfqq->bic && 4987 bfq_serv_to_charge(async_bfqq->next_rq, async_bfqq) <= 4988 bfq_bfqq_budget_left(async_bfqq)) 4989 bfqq = async_bfqq; 4990 else if (bfqq->waker_bfqq && 4991 bfq_bfqq_busy(bfqq->waker_bfqq) && 4992 bfqq->waker_bfqq->next_rq && 4993 bfq_serv_to_charge(bfqq->waker_bfqq->next_rq, 4994 bfqq->waker_bfqq) <= 4995 bfq_bfqq_budget_left(bfqq->waker_bfqq) 4996 ) 4997 bfqq = bfqq->waker_bfqq; 4998 else if (blocked_bfqq && 4999 bfq_bfqq_busy(blocked_bfqq) && 5000 blocked_bfqq->next_rq && 5001 bfq_serv_to_charge(blocked_bfqq->next_rq, 5002 blocked_bfqq) <= 5003 bfq_bfqq_budget_left(blocked_bfqq) 5004 ) 5005 bfqq = blocked_bfqq; 5006 else if (!idling_boosts_thr_without_issues(bfqd, bfqq) && 5007 (bfqq->wr_coeff == 1 || bfqd->wr_busy_queues > 1 || 5008 !bfq_bfqq_has_short_ttime(bfqq))) 5009 bfqq = bfq_choose_bfqq_for_injection(bfqd); 5010 else 5011 bfqq = NULL; 5012 5013 goto keep_queue; 5014 } 5015 5016 reason = BFQQE_NO_MORE_REQUESTS; 5017 expire: 5018 bfq_bfqq_expire(bfqd, bfqq, false, reason); 5019 new_queue: 5020 bfqq = bfq_set_in_service_queue(bfqd); 5021 if (bfqq) { 5022 bfq_log_bfqq(bfqd, bfqq, "select_queue: checking new queue"); 5023 goto check_queue; 5024 } 5025 keep_queue: 5026 if (bfqq) 5027 bfq_log_bfqq(bfqd, bfqq, "select_queue: returned this queue"); 5028 else 5029 bfq_log(bfqd, "select_queue: no queue returned"); 5030 5031 return bfqq; 5032 } 5033 5034 static void bfq_update_wr_data(struct bfq_data *bfqd, struct bfq_queue *bfqq) 5035 { 5036 struct bfq_entity *entity = &bfqq->entity; 5037 5038 if (bfqq->wr_coeff > 1) { /* queue is being weight-raised */ 5039 bfq_log_bfqq(bfqd, bfqq, 5040 "raising period dur %u/%u msec, old coeff %u, w %d(%d)", 5041 jiffies_to_msecs(jiffies - bfqq->last_wr_start_finish), 5042 jiffies_to_msecs(bfqq->wr_cur_max_time), 5043 bfqq->wr_coeff, 5044 bfqq->entity.weight, bfqq->entity.orig_weight); 5045 5046 if (entity->prio_changed) 5047 bfq_log_bfqq(bfqd, bfqq, "WARN: pending prio change"); 5048 5049 /* 5050 * If the queue was activated in a burst, or too much 5051 * time has elapsed from the beginning of this 5052 * weight-raising period, then end weight raising. 5053 */ 5054 if (bfq_bfqq_in_large_burst(bfqq)) 5055 bfq_bfqq_end_wr(bfqq); 5056 else if (time_is_before_jiffies(bfqq->last_wr_start_finish + 5057 bfqq->wr_cur_max_time)) { 5058 if (bfqq->wr_cur_max_time != bfqd->bfq_wr_rt_max_time || 5059 time_is_before_jiffies(bfqq->wr_start_at_switch_to_srt + 5060 bfq_wr_duration(bfqd))) { 5061 /* 5062 * Either in interactive weight 5063 * raising, or in soft_rt weight 5064 * raising with the 5065 * interactive-weight-raising period 5066 * elapsed (so no switch back to 5067 * interactive weight raising). 5068 */ 5069 bfq_bfqq_end_wr(bfqq); 5070 } else { /* 5071 * soft_rt finishing while still in 5072 * interactive period, switch back to 5073 * interactive weight raising 5074 */ 5075 switch_back_to_interactive_wr(bfqq, bfqd); 5076 bfqq->entity.prio_changed = 1; 5077 } 5078 } 5079 if (bfqq->wr_coeff > 1 && 5080 bfqq->wr_cur_max_time != bfqd->bfq_wr_rt_max_time && 5081 bfqq->service_from_wr > max_service_from_wr) { 5082 /* see comments on max_service_from_wr */ 5083 bfq_bfqq_end_wr(bfqq); 5084 } 5085 } 5086 /* 5087 * To improve latency (for this or other queues), immediately 5088 * update weight both if it must be raised and if it must be 5089 * lowered. 
Since entity may be on some active tree here, and 5090 * might have a pending change of its ioprio class, invoke 5091 * the next function with the last parameter unset (see the 5092 * comments on the function). 5093 */ 5094 if ((entity->weight > entity->orig_weight) != (bfqq->wr_coeff > 1)) 5095 __bfq_entity_update_weight_prio(bfq_entity_service_tree(entity), 5096 entity, false); 5097 } 5098 5099 /* 5100 * Dispatch next request from bfqq. 5101 */ 5102 static struct request *bfq_dispatch_rq_from_bfqq(struct bfq_data *bfqd, 5103 struct bfq_queue *bfqq) 5104 { 5105 struct request *rq = bfqq->next_rq; 5106 unsigned long service_to_charge; 5107 5108 service_to_charge = bfq_serv_to_charge(rq, bfqq); 5109 5110 bfq_bfqq_served(bfqq, service_to_charge); 5111 5112 if (bfqq == bfqd->in_service_queue && bfqd->wait_dispatch) { 5113 bfqd->wait_dispatch = false; 5114 bfqd->waited_rq = rq; 5115 } 5116 5117 bfq_dispatch_remove(bfqd->queue, rq); 5118 5119 if (bfqq != bfqd->in_service_queue) 5120 return rq; 5121 5122 /* 5123 * If weight raising has to terminate for bfqq, then the next 5124 * function causes an immediate update of bfqq's weight, 5125 * without waiting for next activation. As a consequence, on 5126 * expiration, bfqq will be timestamped as if it had never been 5127 * weight-raised during this service slot, even if it has 5128 * received part or even most of the service as a 5129 * weight-raised queue. This inflates bfqq's timestamps, which 5130 * is beneficial, as bfqq is then more willing to leave the 5131 * device immediately to possible other weight-raised queues. 5132 */ 5133 bfq_update_wr_data(bfqd, bfqq); 5134 5135 /* 5136 * Expire bfqq, pretending that its budget expired, if bfqq 5137 * belongs to CLASS_IDLE and other queues are waiting for 5138 * service. 5139 */ 5140 if (bfq_tot_busy_queues(bfqd) > 1 && bfq_class_idle(bfqq)) 5141 bfq_bfqq_expire(bfqd, bfqq, false, BFQQE_BUDGET_EXHAUSTED); 5142 5143 return rq; 5144 } 5145 5146 static bool bfq_has_work(struct blk_mq_hw_ctx *hctx) 5147 { 5148 struct bfq_data *bfqd = hctx->queue->elevator->elevator_data; 5149 5150 /* 5151 * Avoiding lock: a race on bfqd->queued should cause at 5152 * most a call to dispatch for nothing 5153 */ 5154 return !list_empty_careful(&bfqd->dispatch) || 5155 READ_ONCE(bfqd->queued); 5156 } 5157 5158 static struct request *__bfq_dispatch_request(struct blk_mq_hw_ctx *hctx) 5159 { 5160 struct bfq_data *bfqd = hctx->queue->elevator->elevator_data; 5161 struct request *rq = NULL; 5162 struct bfq_queue *bfqq = NULL; 5163 5164 if (!list_empty(&bfqd->dispatch)) { 5165 rq = list_first_entry(&bfqd->dispatch, struct request, 5166 queuelist); 5167 list_del_init(&rq->queuelist); 5168 5169 bfqq = RQ_BFQQ(rq); 5170 5171 if (bfqq) { 5172 /* 5173 * Increment counters here, because this 5174 * dispatch does not follow the standard 5175 * dispatch flow (where counters are 5176 * incremented) 5177 */ 5178 bfqq->dispatched++; 5179 5180 goto inc_in_driver_start_rq; 5181 } 5182 5183 /* 5184 * We exploit the bfq_finish_requeue_request hook to 5185 * decrement tot_rq_in_driver, but 5186 * bfq_finish_requeue_request will not be invoked on 5187 * this request. So, to avoid unbalance, just start 5188 * this request, without incrementing tot_rq_in_driver. As 5189 * a negative consequence, tot_rq_in_driver is deceptively 5190 * lower than it should be while this request is in 5191 * service. This may cause bfq_schedule_dispatch to be 5192 * invoked uselessly.
5193 * 5194 * As for implementing an exact solution, the 5195 * bfq_finish_requeue_request hook, if defined, is 5196 * probably invoked also on this request. So, by 5197 * exploiting this hook, we could 1) increment 5198 * tot_rq_in_driver here, and 2) decrement it in 5199 * bfq_finish_requeue_request. Such a solution would 5200 * let the value of the counter be always accurate, 5201 * but it would entail using an extra interface 5202 * function. This cost seems higher than the benefit, 5203 * being the frequency of non-elevator-private 5204 * requests very low. 5205 */ 5206 goto start_rq; 5207 } 5208 5209 bfq_log(bfqd, "dispatch requests: %d busy queues", 5210 bfq_tot_busy_queues(bfqd)); 5211 5212 if (bfq_tot_busy_queues(bfqd) == 0) 5213 goto exit; 5214 5215 /* 5216 * Force device to serve one request at a time if 5217 * strict_guarantees is true. Forcing this service scheme is 5218 * currently the ONLY way to guarantee that the request 5219 * service order enforced by the scheduler is respected by a 5220 * queueing device. Otherwise the device is free even to make 5221 * some unlucky request wait for as long as the device 5222 * wishes. 5223 * 5224 * Of course, serving one request at a time may cause loss of 5225 * throughput. 5226 */ 5227 if (bfqd->strict_guarantees && bfqd->tot_rq_in_driver > 0) 5228 goto exit; 5229 5230 bfqq = bfq_select_queue(bfqd); 5231 if (!bfqq) 5232 goto exit; 5233 5234 rq = bfq_dispatch_rq_from_bfqq(bfqd, bfqq); 5235 5236 if (rq) { 5237 inc_in_driver_start_rq: 5238 bfqd->rq_in_driver[bfqq->actuator_idx]++; 5239 bfqd->tot_rq_in_driver++; 5240 start_rq: 5241 rq->rq_flags |= RQF_STARTED; 5242 } 5243 exit: 5244 return rq; 5245 } 5246 5247 #ifdef CONFIG_BFQ_CGROUP_DEBUG 5248 static void bfq_update_dispatch_stats(struct request_queue *q, 5249 struct request *rq, 5250 struct bfq_queue *in_serv_queue, 5251 bool idle_timer_disabled) 5252 { 5253 struct bfq_queue *bfqq = rq ? RQ_BFQQ(rq) : NULL; 5254 5255 if (!idle_timer_disabled && !bfqq) 5256 return; 5257 5258 /* 5259 * rq and bfqq are guaranteed to exist until this function 5260 * ends, for the following reasons. First, rq can be 5261 * dispatched to the device, and then can be completed and 5262 * freed, only after this function ends. Second, rq cannot be 5263 * merged (and thus freed because of a merge) any longer, 5264 * because it has already started. Thus rq cannot be freed 5265 * before this function ends, and, since rq has a reference to 5266 * bfqq, the same guarantee holds for bfqq too. 5267 * 5268 * In addition, the following queue lock guarantees that 5269 * bfqq_group(bfqq) exists as well. 5270 */ 5271 spin_lock_irq(&q->queue_lock); 5272 if (idle_timer_disabled) 5273 /* 5274 * Since the idle timer has been disabled, 5275 * in_serv_queue contained some request when 5276 * __bfq_dispatch_request was invoked above, which 5277 * implies that rq was picked exactly from 5278 * in_serv_queue. Thus in_serv_queue == bfqq, and is 5279 * therefore guaranteed to exist because of the above 5280 * arguments. 
5281 */ 5282 bfqg_stats_update_idle_time(bfqq_group(in_serv_queue)); 5283 if (bfqq) { 5284 struct bfq_group *bfqg = bfqq_group(bfqq); 5285 5286 bfqg_stats_update_avg_queue_size(bfqg); 5287 bfqg_stats_set_start_empty_time(bfqg); 5288 bfqg_stats_update_io_remove(bfqg, rq->cmd_flags); 5289 } 5290 spin_unlock_irq(&q->queue_lock); 5291 } 5292 #else 5293 static inline void bfq_update_dispatch_stats(struct request_queue *q, 5294 struct request *rq, 5295 struct bfq_queue *in_serv_queue, 5296 bool idle_timer_disabled) {} 5297 #endif /* CONFIG_BFQ_CGROUP_DEBUG */ 5298 5299 static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx) 5300 { 5301 struct bfq_data *bfqd = hctx->queue->elevator->elevator_data; 5302 struct request *rq; 5303 struct bfq_queue *in_serv_queue; 5304 bool waiting_rq, idle_timer_disabled = false; 5305 5306 spin_lock_irq(&bfqd->lock); 5307 5308 in_serv_queue = bfqd->in_service_queue; 5309 waiting_rq = in_serv_queue && bfq_bfqq_wait_request(in_serv_queue); 5310 5311 rq = __bfq_dispatch_request(hctx); 5312 if (in_serv_queue == bfqd->in_service_queue) { 5313 idle_timer_disabled = 5314 waiting_rq && !bfq_bfqq_wait_request(in_serv_queue); 5315 } 5316 5317 spin_unlock_irq(&bfqd->lock); 5318 bfq_update_dispatch_stats(hctx->queue, rq, 5319 idle_timer_disabled ? in_serv_queue : NULL, 5320 idle_timer_disabled); 5321 5322 return rq; 5323 } 5324 5325 /* 5326 * Task holds one reference to the queue, dropped when task exits. Each rq 5327 * in-flight on this queue also holds a reference, dropped when rq is freed. 5328 * 5329 * Scheduler lock must be held here. Recall not to use bfqq after calling 5330 * this function on it. 5331 */ 5332 void bfq_put_queue(struct bfq_queue *bfqq) 5333 { 5334 struct bfq_queue *item; 5335 struct hlist_node *n; 5336 struct bfq_group *bfqg = bfqq_group(bfqq); 5337 5338 bfq_log_bfqq(bfqq->bfqd, bfqq, "put_queue: %p %d", bfqq, bfqq->ref); 5339 5340 bfqq->ref--; 5341 if (bfqq->ref) 5342 return; 5343 5344 if (!hlist_unhashed(&bfqq->burst_list_node)) { 5345 hlist_del_init(&bfqq->burst_list_node); 5346 /* 5347 * Decrement also burst size after the removal, if the 5348 * process associated with bfqq is exiting, and thus 5349 * does not contribute to the burst any longer. This 5350 * decrement helps filter out false positives of large 5351 * bursts, when some short-lived process (often due to 5352 * the execution of commands by some service) happens 5353 * to start and exit while a complex application is 5354 * starting, and thus spawning several processes that 5355 * do I/O (and that *must not* be treated as a large 5356 * burst, see comments on bfq_handle_burst). 5357 * 5358 * In particular, the decrement is performed only if: 5359 * 1) bfqq is not a merged queue, because, if it is, 5360 * then this free of bfqq is not triggered by the exit 5361 * of the process bfqq is associated with, but exactly 5362 * by the fact that bfqq has just been merged. 5363 * 2) burst_size is greater than 0, to handle 5364 * unbalanced decrements. Unbalanced decrements may 5365 * happen in the following case: bfqq is inserted into 5366 * the current burst list--without incrementing 5367 * burst_size--because of a split, but the current 5368 * burst list is not the burst list bfqq belonged to 5369 * (see comments on the case of a split in 5370 * bfq_set_request).
5371 */ 5372 if (bfqq->bic && bfqq->bfqd->burst_size > 0) 5373 bfqq->bfqd->burst_size--; 5374 } 5375 5376 /* 5377 * bfqq does not exist any longer, so it cannot be woken by 5378 * any other queue, and cannot wake any other queue. Then bfqq 5379 * must be removed from the woken list of its possible waker 5380 * queue, and all queues in the woken list of bfqq must stop 5381 * having a waker queue. Strictly speaking, these updates 5382 * should be performed when bfqq remains with no I/O source 5383 * attached to it, which happens before bfqq gets freed. In 5384 * particular, this happens when the last process associated 5385 * with bfqq exits or gets associated with a different 5386 * queue. However, both events lead to bfqq being freed soon, 5387 * and dangling references would come out only after bfqq gets 5388 * freed. So these updates are done here, as a simple and safe 5389 * way to handle all cases. 5390 */ 5391 /* remove bfqq from woken list */ 5392 if (!hlist_unhashed(&bfqq->woken_list_node)) 5393 hlist_del_init(&bfqq->woken_list_node); 5394 5395 /* reset waker for all queues in woken list */ 5396 hlist_for_each_entry_safe(item, n, &bfqq->woken_list, 5397 woken_list_node) { 5398 item->waker_bfqq = NULL; 5399 hlist_del_init(&item->woken_list_node); 5400 } 5401 5402 if (bfqq->bfqd->last_completed_rq_bfqq == bfqq) 5403 bfqq->bfqd->last_completed_rq_bfqq = NULL; 5404 5405 kmem_cache_free(bfq_pool, bfqq); 5406 bfqg_and_blkg_put(bfqg); 5407 } 5408 5409 static void bfq_put_stable_ref(struct bfq_queue *bfqq) 5410 { 5411 bfqq->stable_ref--; 5412 bfq_put_queue(bfqq); 5413 } 5414 5415 void bfq_put_cooperator(struct bfq_queue *bfqq) 5416 { 5417 struct bfq_queue *__bfqq, *next; 5418 5419 /* 5420 * If this queue was scheduled to merge with another queue, be 5421 * sure to drop the reference taken on that queue (and others in 5422 * the merge chain). See bfq_setup_merge and bfq_merge_bfqqs. 5423 */ 5424 __bfqq = bfqq->new_bfqq; 5425 while (__bfqq) { 5426 next = __bfqq->new_bfqq; 5427 bfq_put_queue(__bfqq); 5428 __bfqq = next; 5429 } 5430 } 5431 5432 static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq) 5433 { 5434 if (bfqq == bfqd->in_service_queue) { 5435 __bfq_bfqq_expire(bfqd, bfqq, BFQQE_BUDGET_TIMEOUT); 5436 bfq_schedule_dispatch(bfqd); 5437 } 5438 5439 bfq_log_bfqq(bfqd, bfqq, "exit_bfqq: %p, %d", bfqq, bfqq->ref); 5440 5441 bfq_put_cooperator(bfqq); 5442 5443 bfq_release_process_ref(bfqd, bfqq); 5444 } 5445 5446 static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic, bool is_sync, 5447 unsigned int actuator_idx) 5448 { 5449 struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync, actuator_idx); 5450 struct bfq_data *bfqd; 5451 5452 if (bfqq) 5453 bfqd = bfqq->bfqd; /* NULL if scheduler already exited */ 5454 5455 if (bfqq && bfqd) { 5456 bic_set_bfqq(bic, NULL, is_sync, actuator_idx); 5457 bfq_exit_bfqq(bfqd, bfqq); 5458 } 5459 } 5460 5461 static void bfq_exit_icq(struct io_cq *icq) 5462 { 5463 struct bfq_io_cq *bic = icq_to_bic(icq); 5464 struct bfq_data *bfqd = bic_to_bfqd(bic); 5465 unsigned long flags; 5466 unsigned int act_idx; 5467 /* 5468 * If bfqd and thus bfqd->num_actuators is not available any 5469 * longer, then cycle over all possible per-actuator bfqqs in 5470 * next loop. We rely on bic being zeroed on creation, and 5471 * therefore on its unused per-actuator fields being NULL. 
5472 */ 5473 unsigned int num_actuators = BFQ_MAX_ACTUATORS; 5474 struct bfq_iocq_bfqq_data *bfqq_data = bic->bfqq_data; 5475 5476 /* 5477 * bfqd is NULL if scheduler already exited, and in that case 5478 * this is the last time these queues are accessed. 5479 */ 5480 if (bfqd) { 5481 spin_lock_irqsave(&bfqd->lock, flags); 5482 num_actuators = bfqd->num_actuators; 5483 } 5484 5485 for (act_idx = 0; act_idx < num_actuators; act_idx++) { 5486 if (bfqq_data[act_idx].stable_merge_bfqq) 5487 bfq_put_stable_ref(bfqq_data[act_idx].stable_merge_bfqq); 5488 5489 bfq_exit_icq_bfqq(bic, true, act_idx); 5490 bfq_exit_icq_bfqq(bic, false, act_idx); 5491 } 5492 5493 if (bfqd) 5494 spin_unlock_irqrestore(&bfqd->lock, flags); 5495 } 5496 5497 /* 5498 * Update the entity prio values; note that the new values will not 5499 * be used until the next (re)activation. 5500 */ 5501 static void 5502 bfq_set_next_ioprio_data(struct bfq_queue *bfqq, struct bfq_io_cq *bic) 5503 { 5504 struct task_struct *tsk = current; 5505 int ioprio_class; 5506 struct bfq_data *bfqd = bfqq->bfqd; 5507 5508 if (!bfqd) 5509 return; 5510 5511 ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio); 5512 switch (ioprio_class) { 5513 default: 5514 pr_err("bdi %s: bfq: bad prio class %d\n", 5515 bdi_dev_name(bfqq->bfqd->queue->disk->bdi), 5516 ioprio_class); 5517 fallthrough; 5518 case IOPRIO_CLASS_NONE: 5519 /* 5520 * No prio set, inherit CPU scheduling settings. 5521 */ 5522 bfqq->new_ioprio = task_nice_ioprio(tsk); 5523 bfqq->new_ioprio_class = task_nice_ioclass(tsk); 5524 break; 5525 case IOPRIO_CLASS_RT: 5526 bfqq->new_ioprio = IOPRIO_PRIO_DATA(bic->ioprio); 5527 bfqq->new_ioprio_class = IOPRIO_CLASS_RT; 5528 break; 5529 case IOPRIO_CLASS_BE: 5530 bfqq->new_ioprio = IOPRIO_PRIO_DATA(bic->ioprio); 5531 bfqq->new_ioprio_class = IOPRIO_CLASS_BE; 5532 break; 5533 case IOPRIO_CLASS_IDLE: 5534 bfqq->new_ioprio_class = IOPRIO_CLASS_IDLE; 5535 bfqq->new_ioprio = 7; 5536 break; 5537 } 5538 5539 if (bfqq->new_ioprio >= IOPRIO_NR_LEVELS) { 5540 pr_crit("bfq_set_next_ioprio_data: new_ioprio %d\n", 5541 bfqq->new_ioprio); 5542 bfqq->new_ioprio = IOPRIO_NR_LEVELS - 1; 5543 } 5544 5545 bfqq->entity.new_weight = bfq_ioprio_to_weight(bfqq->new_ioprio); 5546 bfq_log_bfqq(bfqd, bfqq, "new_ioprio %d new_weight %d", 5547 bfqq->new_ioprio, bfqq->entity.new_weight); 5548 bfqq->entity.prio_changed = 1; 5549 } 5550 5551 static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd, 5552 struct bio *bio, bool is_sync, 5553 struct bfq_io_cq *bic, 5554 bool respawn); 5555 5556 static void bfq_check_ioprio_change(struct bfq_io_cq *bic, struct bio *bio) 5557 { 5558 struct bfq_data *bfqd = bic_to_bfqd(bic); 5559 struct bfq_queue *bfqq; 5560 int ioprio = bic->icq.ioc->ioprio; 5561 5562 /* 5563 * This condition may trigger on a newly created bic, be sure to 5564 * drop the lock before returning. 
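 * As a usage-level illustration (a sketch; see bfq_ioprio_to_weight()
 * for the exact conversion): lowering a process's best-effort
 * priority, e.g. with
 *
 *   ionice -c 2 -n 7 -p <pid>
 *
 * is picked up here on the next I/O that process issues, and
 * translates into a smaller entity weight for its queues: the weight
 * decreases roughly linearly as the ioprio value grows from 0
 * (highest priority) to 7 (lowest), so the queue receives a
 * proportionally smaller share of the throughput from its next
 * (re)activation on.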
5565 */ 5566 if (unlikely(!bfqd) || likely(bic->ioprio == ioprio)) 5567 return; 5568 5569 bic->ioprio = ioprio; 5570 5571 bfqq = bic_to_bfqq(bic, false, bfq_actuator_index(bfqd, bio)); 5572 if (bfqq) { 5573 struct bfq_queue *old_bfqq = bfqq; 5574 5575 bfqq = bfq_get_queue(bfqd, bio, false, bic, true); 5576 bic_set_bfqq(bic, bfqq, false, bfq_actuator_index(bfqd, bio)); 5577 bfq_release_process_ref(bfqd, old_bfqq); 5578 } 5579 5580 bfqq = bic_to_bfqq(bic, true, bfq_actuator_index(bfqd, bio)); 5581 if (bfqq) 5582 bfq_set_next_ioprio_data(bfqq, bic); 5583 } 5584 5585 static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq, 5586 struct bfq_io_cq *bic, pid_t pid, int is_sync, 5587 unsigned int act_idx) 5588 { 5589 u64 now_ns = ktime_get_ns(); 5590 5591 bfqq->actuator_idx = act_idx; 5592 RB_CLEAR_NODE(&bfqq->entity.rb_node); 5593 INIT_LIST_HEAD(&bfqq->fifo); 5594 INIT_HLIST_NODE(&bfqq->burst_list_node); 5595 INIT_HLIST_NODE(&bfqq->woken_list_node); 5596 INIT_HLIST_HEAD(&bfqq->woken_list); 5597 5598 bfqq->ref = 0; 5599 bfqq->bfqd = bfqd; 5600 5601 if (bic) 5602 bfq_set_next_ioprio_data(bfqq, bic); 5603 5604 if (is_sync) { 5605 /* 5606 * No need to mark as has_short_ttime if in 5607 * idle_class, because no device idling is performed 5608 * for queues in idle class 5609 */ 5610 if (!bfq_class_idle(bfqq)) 5611 /* tentatively mark as has_short_ttime */ 5612 bfq_mark_bfqq_has_short_ttime(bfqq); 5613 bfq_mark_bfqq_sync(bfqq); 5614 bfq_mark_bfqq_just_created(bfqq); 5615 } else 5616 bfq_clear_bfqq_sync(bfqq); 5617 5618 /* set end request to minus infinity from now */ 5619 bfqq->ttime.last_end_request = now_ns + 1; 5620 5621 bfqq->creation_time = jiffies; 5622 5623 bfqq->io_start_time = now_ns; 5624 5625 bfq_mark_bfqq_IO_bound(bfqq); 5626 5627 bfqq->pid = pid; 5628 5629 /* Tentative initial value to trade off between thr and lat */ 5630 bfqq->max_budget = (2 * bfq_max_budget(bfqd)) / 3; 5631 bfqq->budget_timeout = bfq_smallest_from_now(); 5632 5633 bfqq->wr_coeff = 1; 5634 bfqq->last_wr_start_finish = jiffies; 5635 bfqq->wr_start_at_switch_to_srt = bfq_smallest_from_now(); 5636 bfqq->split_time = bfq_smallest_from_now(); 5637 5638 /* 5639 * To not forget the possibly high bandwidth consumed by a 5640 * process/queue in the recent past, 5641 * bfq_bfqq_softrt_next_start() returns a value at least equal 5642 * to the current value of bfqq->soft_rt_next_start (see 5643 * comments on bfq_bfqq_softrt_next_start). Set 5644 * soft_rt_next_start to now, to mean that bfqq has consumed 5645 * no bandwidth so far. 
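 * For instance (illustrative numbers): if at some later expiration
 * bfqq has accumulated service_from_backlogged = 2048 sectors since
 * it last became backlogged at time last_idle_bklogged, and
 * bfq_wr_max_softrt_rate is 7000 sectors/sec, then
 * bfq_bfqq_softrt_next_start() will not let soft_rt_next_start drop
 * below roughly
 *
 *   last_idle_bklogged + HZ * 2048 / 7000 jiffies,
 *
 * i.e., the earliest time at which having received that much service
 * is still compatible with a soft real-time rate.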
5646 */ 5647 bfqq->soft_rt_next_start = jiffies; 5648 5649 /* first request is almost certainly seeky */ 5650 bfqq->seek_history = 1; 5651 5652 bfqq->decrease_time_jif = jiffies; 5653 } 5654 5655 static struct bfq_queue **bfq_async_queue_prio(struct bfq_data *bfqd, 5656 struct bfq_group *bfqg, 5657 int ioprio_class, int ioprio, int act_idx) 5658 { 5659 switch (ioprio_class) { 5660 case IOPRIO_CLASS_RT: 5661 return &bfqg->async_bfqq[0][ioprio][act_idx]; 5662 case IOPRIO_CLASS_NONE: 5663 ioprio = IOPRIO_BE_NORM; 5664 fallthrough; 5665 case IOPRIO_CLASS_BE: 5666 return &bfqg->async_bfqq[1][ioprio][act_idx]; 5667 case IOPRIO_CLASS_IDLE: 5668 return &bfqg->async_idle_bfqq[act_idx]; 5669 default: 5670 return NULL; 5671 } 5672 } 5673 5674 static struct bfq_queue * 5675 bfq_do_early_stable_merge(struct bfq_data *bfqd, struct bfq_queue *bfqq, 5676 struct bfq_io_cq *bic, 5677 struct bfq_queue *last_bfqq_created) 5678 { 5679 unsigned int a_idx = last_bfqq_created->actuator_idx; 5680 struct bfq_queue *new_bfqq = 5681 bfq_setup_merge(bfqq, last_bfqq_created); 5682 5683 if (!new_bfqq) 5684 return bfqq; 5685 5686 if (new_bfqq->bic) 5687 new_bfqq->bic->bfqq_data[a_idx].stably_merged = true; 5688 bic->bfqq_data[a_idx].stably_merged = true; 5689 5690 /* 5691 * Reusing merge functions. This implies that 5692 * bfqq->bic must be set too, for 5693 * bfq_merge_bfqqs to correctly save bfqq's 5694 * state before killing it. 5695 */ 5696 bfqq->bic = bic; 5697 bfq_merge_bfqqs(bfqd, bic, bfqq, new_bfqq); 5698 5699 return new_bfqq; 5700 } 5701 5702 /* 5703 * Many throughput-sensitive workloads are made of several parallel 5704 * I/O flows, with all flows generated by the same application, or 5705 * more generically by the same task (e.g., system boot). The most 5706 * counterproductive action with these workloads is plugging I/O 5707 * dispatch when one of the bfq_queues associated with these flows 5708 * remains temporarily empty. 5709 * 5710 * To avoid this plugging, BFQ has been using a burst-handling 5711 * mechanism for years now. This mechanism has proven effective for 5712 * throughput, and not detrimental for service guarantees. The 5713 * following function pushes this mechanism a little bit further, 5714 * based on the following two facts. 5715 * 5716 * First, all the I/O flows of the same application or task 5717 * contribute to the execution/completion of that common application 5718 * or task. So the performance figures that matter are total 5719 * throughput of the flows and task-wide I/O latency. In particular, 5720 * these flows do not need to be protected from each other, in terms 5721 * of individual bandwidth or latency. 5722 * 5723 * Second, the above fact holds regardless of the number of flows. 5724 * 5725 * Putting these two facts together, this commit stably merges the 5726 * bfq_queues associated with these I/O flows, i.e., with the 5727 * processes that generate these I/O flows, regardless of how many 5728 * processes are involved.
5729 * 5730 * To decide whether a set of bfq_queues is actually associated with 5731 * the I/O flows of a common application or task, and to merge these 5732 * queues stably, this function operates as follows: given a bfq_queue, 5733 * say Q2, currently being created, and the last bfq_queue, say Q1, 5734 * created before Q2, Q2 is merged stably with Q1 if 5735 * - very little time has elapsed since when Q1 was created 5736 * - Q2 has the same ioprio as Q1 5737 * - Q2 belongs to the same group as Q1 5738 * 5739 * Merging bfq_queues also reduces scheduling overhead. A fio test 5740 * with ten random readers on /dev/nullb shows a throughput boost of 5741 * 40%, with a quadcore. Since BFQ's execution time amounts to ~50% of 5742 * the total per-request processing time, the above throughput boost 5743 * implies that BFQ's overhead is reduced by more than 50%. 5744 * 5745 * This new mechanism most certainly obsoletes the current 5746 * burst-handling heuristics. We keep those heuristics for the moment. 5747 */ 5748 static struct bfq_queue *bfq_do_or_sched_stable_merge(struct bfq_data *bfqd, 5749 struct bfq_queue *bfqq, 5750 struct bfq_io_cq *bic) 5751 { 5752 struct bfq_queue **source_bfqq = bfqq->entity.parent ? 5753 &bfqq->entity.parent->last_bfqq_created : 5754 &bfqd->last_bfqq_created; 5755 5756 struct bfq_queue *last_bfqq_created = *source_bfqq; 5757 5758 /* 5759 * If last_bfqq_created has not been set yet, then init it. If 5760 * it has been set already, but too long ago, then move it 5761 * forward to bfqq. Finally, move also if bfqq belongs to a 5762 * different group than last_bfqq_created, or if bfqq has a 5763 * different ioprio, ioprio_class or actuator_idx. If none of 5764 * these conditions holds true, then try an early stable merge 5765 * or schedule a delayed stable merge. As for the condition on 5766 * actuator_idx, the reason is that, if queues associated with 5767 * different actuators are merged, then control is lost on 5768 * each actuator. Therefore some actuator may be 5769 * underutilized, and throughput may decrease. 5770 * 5771 * A delayed merge is scheduled (instead of performing an 5772 * early merge), in case bfqq might soon prove to be more 5773 * throughput-beneficial if not merged. Currently this is 5774 * possible only if bfqd is rotational with no queueing. For 5775 * such a drive, not merging bfqq is better for throughput if 5776 * bfqq happens to contain sequential I/O. So, we wait a 5777 * little bit for enough I/O to flow through bfqq. After that, 5778 * if such an I/O is sequential, then the merge is 5779 * canceled. Otherwise the merge is finally performed. 5780 */ 5781 if (!last_bfqq_created || 5782 time_before(last_bfqq_created->creation_time + 5783 msecs_to_jiffies(bfq_activation_stable_merging), 5784 bfqq->creation_time) || 5785 bfqq->entity.parent != last_bfqq_created->entity.parent || 5786 bfqq->ioprio != last_bfqq_created->ioprio || 5787 bfqq->ioprio_class != last_bfqq_created->ioprio_class || 5788 bfqq->actuator_idx != last_bfqq_created->actuator_idx) 5789 *source_bfqq = bfqq; 5790 else if (time_after_eq(last_bfqq_created->creation_time + 5791 bfqd->bfq_burst_interval, 5792 bfqq->creation_time)) { 5793 if (likely(bfqd->nonrot_with_queueing)) 5794 /* 5795 * With this type of drive, leaving 5796 * bfqq alone may provide no 5797 * throughput benefits compared with 5798 * merging bfqq. So merge bfqq now. 
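 * For reference, a test along the lines of the one mentioned in the
 * comment above this function (hypothetical invocation, parameters
 * not taken from the original experiment) could look like:
 *
 *   fio --name=readers --filename=/dev/nullb0 --rw=randread \
 *       --bs=4k --ioengine=psync --direct=1 --numjobs=10 \
 *       --time_based --runtime=30 --group_reporting
 *
 * A null_blk device is non-rotational and supports queueing, so it
 * falls in the nonrot_with_queueing case handled here, and the ten
 * reader queues get merged early.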
5799 */ 5800 bfqq = bfq_do_early_stable_merge(bfqd, bfqq, 5801 bic, 5802 last_bfqq_created); 5803 else { /* schedule tentative stable merge */ 5804 /* 5805 * get reference on last_bfqq_created, 5806 * to prevent it from being freed, 5807 * until we decide whether to merge 5808 */ 5809 last_bfqq_created->ref++; 5810 /* 5811 * need to keep track of stable refs, to 5812 * compute process refs correctly 5813 */ 5814 last_bfqq_created->stable_ref++; 5815 /* 5816 * Record the bfqq to merge to. 5817 */ 5818 bic->bfqq_data[last_bfqq_created->actuator_idx].stable_merge_bfqq = 5819 last_bfqq_created; 5820 } 5821 } 5822 5823 return bfqq; 5824 } 5825 5826 5827 static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd, 5828 struct bio *bio, bool is_sync, 5829 struct bfq_io_cq *bic, 5830 bool respawn) 5831 { 5832 const int ioprio = IOPRIO_PRIO_DATA(bic->ioprio); 5833 const int ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio); 5834 struct bfq_queue **async_bfqq = NULL; 5835 struct bfq_queue *bfqq; 5836 struct bfq_group *bfqg; 5837 5838 bfqg = bfq_bio_bfqg(bfqd, bio); 5839 if (!is_sync) { 5840 async_bfqq = bfq_async_queue_prio(bfqd, bfqg, ioprio_class, 5841 ioprio, 5842 bfq_actuator_index(bfqd, bio)); 5843 bfqq = *async_bfqq; 5844 if (bfqq) 5845 goto out; 5846 } 5847 5848 bfqq = kmem_cache_alloc_node(bfq_pool, 5849 GFP_NOWAIT | __GFP_ZERO | __GFP_NOWARN, 5850 bfqd->queue->node); 5851 5852 if (bfqq) { 5853 bfq_init_bfqq(bfqd, bfqq, bic, current->pid, 5854 is_sync, bfq_actuator_index(bfqd, bio)); 5855 bfq_init_entity(&bfqq->entity, bfqg); 5856 bfq_log_bfqq(bfqd, bfqq, "allocated"); 5857 } else { 5858 bfqq = &bfqd->oom_bfqq; 5859 bfq_log_bfqq(bfqd, bfqq, "using oom bfqq"); 5860 goto out; 5861 } 5862 5863 /* 5864 * Pin the queue now that it's allocated, scheduler exit will 5865 * prune it. 5866 */ 5867 if (async_bfqq) { 5868 bfqq->ref++; /* 5869 * Extra group reference, w.r.t. sync 5870 * queue. This extra reference is removed 5871 * only if bfqq->bfqg disappears, to 5872 * guarantee that this queue is not freed 5873 * until its group goes away. 5874 */ 5875 bfq_log_bfqq(bfqd, bfqq, "get_queue, bfqq not in async: %p, %d", 5876 bfqq, bfqq->ref); 5877 *async_bfqq = bfqq; 5878 } 5879 5880 out: 5881 bfqq->ref++; /* get a process reference to this queue */ 5882 5883 if (bfqq != &bfqd->oom_bfqq && is_sync && !respawn) 5884 bfqq = bfq_do_or_sched_stable_merge(bfqd, bfqq, bic); 5885 return bfqq; 5886 } 5887 5888 static void bfq_update_io_thinktime(struct bfq_data *bfqd, 5889 struct bfq_queue *bfqq) 5890 { 5891 struct bfq_ttime *ttime = &bfqq->ttime; 5892 u64 elapsed; 5893 5894 /* 5895 * We are really interested in how long it takes for the queue to 5896 * become busy when there is no outstanding IO for this queue. So 5897 * ignore cases when the bfq queue has already IO queued. 
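 * The update below keeps a decaying average of the think time in
 * fixed point: each new sample replaces roughly one eighth of the
 * history, with the sample count scaled by 256. A sketch of the
 * resulting behavior (illustrative; see the code for the exact
 * rounding):
 *
 *   ttime_mean ~ (7 * old_mean + new_sample) / 8
 *
 * so, e.g., a process that suddenly starts thinking for ~8 ms
 * between I/Os sees its mean drift toward 8 ms over a few samples
 * rather than jumping there immediately; each sample is also clamped
 * to at most 2 * bfq_slice_idle before being folded in.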
5898 */ 5899 if (bfqq->dispatched || bfq_bfqq_busy(bfqq)) 5900 return; 5901 elapsed = ktime_get_ns() - bfqq->ttime.last_end_request; 5902 elapsed = min_t(u64, elapsed, 2ULL * bfqd->bfq_slice_idle); 5903 5904 ttime->ttime_samples = (7*ttime->ttime_samples + 256) / 8; 5905 ttime->ttime_total = div_u64(7*ttime->ttime_total + 256*elapsed, 8); 5906 ttime->ttime_mean = div64_ul(ttime->ttime_total + 128, 5907 ttime->ttime_samples); 5908 } 5909 5910 static void 5911 bfq_update_io_seektime(struct bfq_data *bfqd, struct bfq_queue *bfqq, 5912 struct request *rq) 5913 { 5914 bfqq->seek_history <<= 1; 5915 bfqq->seek_history |= BFQ_RQ_SEEKY(bfqd, bfqq->last_request_pos, rq); 5916 5917 if (bfqq->wr_coeff > 1 && 5918 bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time && 5919 BFQQ_TOTALLY_SEEKY(bfqq)) { 5920 if (time_is_before_jiffies(bfqq->wr_start_at_switch_to_srt + 5921 bfq_wr_duration(bfqd))) { 5922 /* 5923 * In soft_rt weight raising with the 5924 * interactive-weight-raising period 5925 * elapsed (so no switch back to 5926 * interactive weight raising). 5927 */ 5928 bfq_bfqq_end_wr(bfqq); 5929 } else { /* 5930 * stopping soft_rt weight raising 5931 * while still in interactive period, 5932 * switch back to interactive weight 5933 * raising 5934 */ 5935 switch_back_to_interactive_wr(bfqq, bfqd); 5936 bfqq->entity.prio_changed = 1; 5937 } 5938 } 5939 } 5940 5941 static void bfq_update_has_short_ttime(struct bfq_data *bfqd, 5942 struct bfq_queue *bfqq, 5943 struct bfq_io_cq *bic) 5944 { 5945 bool has_short_ttime = true, state_changed; 5946 5947 /* 5948 * No need to update has_short_ttime if bfqq is async or in 5949 * idle io prio class, or if bfq_slice_idle is zero, because 5950 * no device idling is performed for bfqq in this case. 5951 */ 5952 if (!bfq_bfqq_sync(bfqq) || bfq_class_idle(bfqq) || 5953 bfqd->bfq_slice_idle == 0) 5954 return; 5955 5956 /* Idle window just restored, statistics are meaningless. */ 5957 if (time_is_after_eq_jiffies(bfqq->split_time + 5958 bfqd->bfq_wr_min_idle_time)) 5959 return; 5960 5961 /* Think time is infinite if no process is linked to 5962 * bfqq. Otherwise check average think time to decide whether 5963 * to mark as has_short_ttime. To this goal, compare average 5964 * think time with half the I/O-plugging timeout. 5965 */ 5966 if (atomic_read(&bic->icq.ioc->active_ref) == 0 || 5967 (bfq_sample_valid(bfqq->ttime.ttime_samples) && 5968 bfqq->ttime.ttime_mean > bfqd->bfq_slice_idle>>1)) 5969 has_short_ttime = false; 5970 5971 state_changed = has_short_ttime != bfq_bfqq_has_short_ttime(bfqq); 5972 5973 if (has_short_ttime) 5974 bfq_mark_bfqq_has_short_ttime(bfqq); 5975 else 5976 bfq_clear_bfqq_has_short_ttime(bfqq); 5977 5978 /* 5979 * Until the base value for the total service time gets 5980 * finally computed for bfqq, the inject limit does depend on 5981 * the think-time state (short|long). In particular, the limit 5982 * is 0 or 1 if the think time is deemed, respectively, as 5983 * short or long (details in the comments in 5984 * bfq_update_inject_limit()). Accordingly, the next 5985 * instructions reset the inject limit if the think-time state 5986 * has changed and the above base value is still to be 5987 * computed. 5988 * 5989 * However, the reset is performed only if more than 100 ms 5990 * have elapsed since the last update of the inject limit, or 5991 * (inclusive) if the change is from short to long think 5992 * time. The reason for this waiting is as follows. 
5993 * 5994 * bfqq may have a long think time because of a 5995 * synchronization with some other queue, i.e., because the 5996 * I/O of some other queue may need to be completed for bfqq 5997 * to receive new I/O. Details in the comments on the choice 5998 * of the queue for injection in bfq_select_queue(). 5999 * 6000 * As stressed in those comments, if such a synchronization is 6001 * actually in place, then, without injection on bfqq, the 6002 * blocking I/O cannot happen to be served while bfqq is in 6003 * service. As a consequence, if bfqq is granted 6004 * I/O-dispatch-plugging, then bfqq remains empty, and no I/O 6005 * is dispatched, until the idle timeout fires. This is likely 6006 * to result in lower bandwidth and higher latencies for bfqq, 6007 * and in a severe loss of total throughput. 6008 * 6009 * On the opposite end, a non-zero inject limit may allow the 6010 * I/O that blocks bfqq to be executed soon, and therefore 6011 * bfqq to receive new I/O soon. 6012 * 6013 * But, if the blocking gets actually eliminated, then the 6014 * next think-time sample for bfqq may be very low. This in 6015 * turn may cause bfqq's think time to be deemed 6016 * short. Without the 100 ms barrier, this new state change 6017 * would cause the body of the next if to be executed 6018 * immediately. But this would set the inject 6019 * limit to 0. Without injection, the blocking I/O would cause the 6020 * think time of bfqq to become long again, and therefore the 6021 * inject limit to be raised again, and so on. The only effect 6022 * of such a steady oscillation between the two think-time 6023 * states would be to prevent effective injection on bfqq. 6024 * 6025 * In contrast, if the inject limit is not reset during such a 6026 * long time interval as 100 ms, then the number of short 6027 * think time samples can grow significantly before the reset 6028 * is performed. As a consequence, the think time state can 6029 * become stable before the reset. Therefore there will be no 6030 * state change when the 100 ms elapse, and no reset of the 6031 * inject limit. The inject limit remains steadily equal to 1 6032 * both during and after the 100 ms. So injection can be 6033 * performed at all times, and throughput gets boosted. 6034 * 6035 * An inject limit equal to 1 is however in conflict, in 6036 * general, with the fact that the think time of bfqq is 6037 * short, because injection may be likely to delay bfqq's I/O 6038 * (as explained in the comments in 6039 * bfq_update_inject_limit()). But this does not happen in 6040 * this special case, because bfqq's low think time is due to 6041 * an effective handling of a synchronization, through 6042 * injection. In this special case, bfqq's I/O does not get 6043 * delayed by injection; on the contrary, bfqq's I/O is 6044 * brought forward, because it is not blocked for 6045 * milliseconds. 6046 * 6047 * In addition, serving the blocking I/O much sooner, and much 6048 * more frequently than once per I/O-plugging timeout, makes 6049 * it much quicker to detect a waker queue (the concept of 6050 * waker queue is defined in the comments in 6051 * bfq_add_request()). This makes it possible to start sooner 6052 * to boost throughput more effectively, by injecting the I/O 6053 * of the waker queue unconditionally on every 6054 * bfq_dispatch_request().
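 *
 * Purely illustrative timeline of the above hysteresis (editorial,
 * with made-up numbers): say the limit was last updated at t = 0 and
 * is currently 1. If injection removes the blocking and, at t = 30 ms,
 * the think time is deemed short, this state change alone does not
 * reset the limit, because fewer than 100 ms have elapsed and the
 * change is not towards a long think time. Injection therefore keeps
 * going, short samples accumulate, and by the time the 100 ms expire
 * the state has typically become stable, so no further state change,
 * and hence no reset, occurs.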
6055 * 6056 * One last, important benefit of not resetting the inject 6057 * limit before 100 ms is that, during this time interval, the 6058 * base value for the total service time is likely to get 6059 * finally computed for bfqq, freeing the inject limit from 6060 * its relation with the think time. 6061 */ 6062 if (state_changed && bfqq->last_serv_time_ns == 0 && 6063 (time_is_before_eq_jiffies(bfqq->decrease_time_jif + 6064 msecs_to_jiffies(100)) || 6065 !has_short_ttime)) 6066 bfq_reset_inject_limit(bfqd, bfqq); 6067 } 6068 6069 /* 6070 * Called when a new fs request (rq) is added to bfqq. Check if there's 6071 * something we should do about it. 6072 */ 6073 static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq, 6074 struct request *rq) 6075 { 6076 if (rq->cmd_flags & REQ_META) 6077 bfqq->meta_pending++; 6078 6079 bfqq->last_request_pos = blk_rq_pos(rq) + blk_rq_sectors(rq); 6080 6081 if (bfqq == bfqd->in_service_queue && bfq_bfqq_wait_request(bfqq)) { 6082 bool small_req = bfqq->queued[rq_is_sync(rq)] == 1 && 6083 blk_rq_sectors(rq) < 32; 6084 bool budget_timeout = bfq_bfqq_budget_timeout(bfqq); 6085 6086 /* 6087 * There is just this request queued: if 6088 * - the request is small, and 6089 * - we are idling to boost throughput, and 6090 * - the queue is not to be expired, 6091 * then just exit. 6092 * 6093 * In this way, if the device is being idled to wait 6094 * for a new request from the in-service queue, we 6095 * avoid unplugging the device and committing the 6096 * device to serve just a small request. In contrast 6097 * we wait for the block layer to decide when to 6098 * unplug the device: hopefully, new requests will be 6099 * merged to this one quickly, then the device will be 6100 * unplugged and larger requests will be dispatched. 6101 */ 6102 if (small_req && idling_boosts_thr_without_issues(bfqd, bfqq) && 6103 !budget_timeout) 6104 return; 6105 6106 /* 6107 * A large enough request arrived, or idling is being 6108 * performed to preserve service guarantees, or 6109 * finally the queue is to be expired: in all these 6110 * cases disk idling is to be stopped, so clear 6111 * wait_request flag and reset timer. 6112 */ 6113 bfq_clear_bfqq_wait_request(bfqq); 6114 hrtimer_try_to_cancel(&bfqd->idle_slice_timer); 6115 6116 /* 6117 * The queue is not empty, because a new request just 6118 * arrived. Hence we can safely expire the queue, in 6119 * case of budget timeout, without risking that the 6120 * timestamps of the queue are not updated correctly. 6121 * See [1] for more details. 6122 */ 6123 if (budget_timeout) 6124 bfq_bfqq_expire(bfqd, bfqq, false, 6125 BFQQE_BUDGET_TIMEOUT); 6126 } 6127 } 6128 6129 static void bfqq_request_allocated(struct bfq_queue *bfqq) 6130 { 6131 struct bfq_entity *entity = &bfqq->entity; 6132 6133 for_each_entity(entity) 6134 entity->allocated++; 6135 } 6136 6137 static void bfqq_request_freed(struct bfq_queue *bfqq) 6138 { 6139 struct bfq_entity *entity = &bfqq->entity; 6140 6141 for_each_entity(entity) 6142 entity->allocated--; 6143 } 6144 6145 /* returns true if it causes the idle timer to be disabled */ 6146 static bool __bfq_insert_request(struct bfq_data *bfqd, struct request *rq) 6147 { 6148 struct bfq_queue *bfqq = RQ_BFQQ(rq), 6149 *new_bfqq = bfq_setup_cooperator(bfqd, bfqq, rq, true, 6150 RQ_BIC(rq)); 6151 bool waiting, idle_timer_disabled = false; 6152 6153 if (new_bfqq) { 6154 /* 6155 * Release the request's reference to the old bfqq 6156 * and make sure one is taken to the shared queue. 
6157 */ 6158 bfqq_request_allocated(new_bfqq); 6159 bfqq_request_freed(bfqq); 6160 new_bfqq->ref++; 6161 /* 6162 * If the bic associated with the process 6163 * issuing this request still points to bfqq 6164 * (and thus has not been already redirected 6165 * to new_bfqq or even some other bfq_queue), 6166 * then complete the merge and redirect it to 6167 * new_bfqq. 6168 */ 6169 if (bic_to_bfqq(RQ_BIC(rq), true, 6170 bfq_actuator_index(bfqd, rq->bio)) == bfqq) 6171 bfq_merge_bfqqs(bfqd, RQ_BIC(rq), 6172 bfqq, new_bfqq); 6173 6174 bfq_clear_bfqq_just_created(bfqq); 6175 /* 6176 * rq is about to be enqueued into new_bfqq, 6177 * release rq reference on bfqq 6178 */ 6179 bfq_put_queue(bfqq); 6180 rq->elv.priv[1] = new_bfqq; 6181 bfqq = new_bfqq; 6182 } 6183 6184 bfq_update_io_thinktime(bfqd, bfqq); 6185 bfq_update_has_short_ttime(bfqd, bfqq, RQ_BIC(rq)); 6186 bfq_update_io_seektime(bfqd, bfqq, rq); 6187 6188 waiting = bfqq && bfq_bfqq_wait_request(bfqq); 6189 bfq_add_request(rq); 6190 idle_timer_disabled = waiting && !bfq_bfqq_wait_request(bfqq); 6191 6192 rq->fifo_time = ktime_get_ns() + bfqd->bfq_fifo_expire[rq_is_sync(rq)]; 6193 list_add_tail(&rq->queuelist, &bfqq->fifo); 6194 6195 bfq_rq_enqueued(bfqd, bfqq, rq); 6196 6197 return idle_timer_disabled; 6198 } 6199 6200 #ifdef CONFIG_BFQ_CGROUP_DEBUG 6201 static void bfq_update_insert_stats(struct request_queue *q, 6202 struct bfq_queue *bfqq, 6203 bool idle_timer_disabled, 6204 blk_opf_t cmd_flags) 6205 { 6206 if (!bfqq) 6207 return; 6208 6209 /* 6210 * bfqq still exists, because it can disappear only after 6211 * either it is merged with another queue, or the process it 6212 * is associated with exits. But both actions must be taken by 6213 * the same process currently executing this flow of 6214 * instructions. 6215 * 6216 * In addition, the following queue lock guarantees that 6217 * bfqq_group(bfqq) exists as well. 6218 */ 6219 spin_lock_irq(&q->queue_lock); 6220 bfqg_stats_update_io_add(bfqq_group(bfqq), bfqq, cmd_flags); 6221 if (idle_timer_disabled) 6222 bfqg_stats_update_idle_time(bfqq_group(bfqq)); 6223 spin_unlock_irq(&q->queue_lock); 6224 } 6225 #else 6226 static inline void bfq_update_insert_stats(struct request_queue *q, 6227 struct bfq_queue *bfqq, 6228 bool idle_timer_disabled, 6229 blk_opf_t cmd_flags) {} 6230 #endif /* CONFIG_BFQ_CGROUP_DEBUG */ 6231 6232 static struct bfq_queue *bfq_init_rq(struct request *rq); 6233 6234 static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq, 6235 bool at_head) 6236 { 6237 struct request_queue *q = hctx->queue; 6238 struct bfq_data *bfqd = q->elevator->elevator_data; 6239 struct bfq_queue *bfqq; 6240 bool idle_timer_disabled = false; 6241 blk_opf_t cmd_flags; 6242 LIST_HEAD(free); 6243 6244 #ifdef CONFIG_BFQ_GROUP_IOSCHED 6245 if (!cgroup_subsys_on_dfl(io_cgrp_subsys) && rq->bio) 6246 bfqg_stats_update_legacy_io(q, rq); 6247 #endif 6248 spin_lock_irq(&bfqd->lock); 6249 bfqq = bfq_init_rq(rq); 6250 if (blk_mq_sched_try_insert_merge(q, rq, &free)) { 6251 spin_unlock_irq(&bfqd->lock); 6252 blk_mq_free_requests(&free); 6253 return; 6254 } 6255 6256 trace_block_rq_insert(rq); 6257 6258 if (!bfqq || at_head) { 6259 if (at_head) 6260 list_add(&rq->queuelist, &bfqd->dispatch); 6261 else 6262 list_add_tail(&rq->queuelist, &bfqd->dispatch); 6263 } else { 6264 idle_timer_disabled = __bfq_insert_request(bfqd, rq); 6265 /* 6266 * Update bfqq, because, if a queue merge has occurred 6267 * in __bfq_insert_request, then rq has been 6268 * redirected into a new queue. 
6269 */ 6270 bfqq = RQ_BFQQ(rq); 6271 6272 if (rq_mergeable(rq)) { 6273 elv_rqhash_add(q, rq); 6274 if (!q->last_merge) 6275 q->last_merge = rq; 6276 } 6277 } 6278 6279 /* 6280 * Cache cmd_flags before releasing scheduler lock, because rq 6281 * may disappear afterwards (for example, because of a request 6282 * merge). 6283 */ 6284 cmd_flags = rq->cmd_flags; 6285 spin_unlock_irq(&bfqd->lock); 6286 6287 bfq_update_insert_stats(q, bfqq, idle_timer_disabled, 6288 cmd_flags); 6289 } 6290 6291 static void bfq_insert_requests(struct blk_mq_hw_ctx *hctx, 6292 struct list_head *list, bool at_head) 6293 { 6294 while (!list_empty(list)) { 6295 struct request *rq; 6296 6297 rq = list_first_entry(list, struct request, queuelist); 6298 list_del_init(&rq->queuelist); 6299 bfq_insert_request(hctx, rq, at_head); 6300 } 6301 } 6302 6303 static void bfq_update_hw_tag(struct bfq_data *bfqd) 6304 { 6305 struct bfq_queue *bfqq = bfqd->in_service_queue; 6306 6307 bfqd->max_rq_in_driver = max_t(int, bfqd->max_rq_in_driver, 6308 bfqd->tot_rq_in_driver); 6309 6310 if (bfqd->hw_tag == 1) 6311 return; 6312 6313 /* 6314 * This sample is valid if the number of outstanding requests 6315 * is large enough to allow a queueing behavior. Note that the 6316 * sum is not exact, as it's not taking into account deactivated 6317 * requests. 6318 */ 6319 if (bfqd->tot_rq_in_driver + bfqd->queued <= BFQ_HW_QUEUE_THRESHOLD) 6320 return; 6321 6322 /* 6323 * If active queue hasn't enough requests and can idle, bfq might not 6324 * dispatch sufficient requests to hardware. Don't zero hw_tag in this 6325 * case 6326 */ 6327 if (bfqq && bfq_bfqq_has_short_ttime(bfqq) && 6328 bfqq->dispatched + bfqq->queued[0] + bfqq->queued[1] < 6329 BFQ_HW_QUEUE_THRESHOLD && 6330 bfqd->tot_rq_in_driver < BFQ_HW_QUEUE_THRESHOLD) 6331 return; 6332 6333 if (bfqd->hw_tag_samples++ < BFQ_HW_QUEUE_SAMPLES) 6334 return; 6335 6336 bfqd->hw_tag = bfqd->max_rq_in_driver > BFQ_HW_QUEUE_THRESHOLD; 6337 bfqd->max_rq_in_driver = 0; 6338 bfqd->hw_tag_samples = 0; 6339 6340 bfqd->nonrot_with_queueing = 6341 blk_queue_nonrot(bfqd->queue) && bfqd->hw_tag; 6342 } 6343 6344 static void bfq_completed_request(struct bfq_queue *bfqq, struct bfq_data *bfqd) 6345 { 6346 u64 now_ns; 6347 u32 delta_us; 6348 6349 bfq_update_hw_tag(bfqd); 6350 6351 bfqd->rq_in_driver[bfqq->actuator_idx]--; 6352 bfqd->tot_rq_in_driver--; 6353 bfqq->dispatched--; 6354 6355 if (!bfqq->dispatched && !bfq_bfqq_busy(bfqq)) { 6356 /* 6357 * Set budget_timeout (which we overload to store the 6358 * time at which the queue remains with no backlog and 6359 * no outstanding request; used by the weight-raising 6360 * mechanism). 6361 */ 6362 bfqq->budget_timeout = jiffies; 6363 6364 bfq_del_bfqq_in_groups_with_pending_reqs(bfqq); 6365 bfq_weights_tree_remove(bfqq); 6366 } 6367 6368 now_ns = ktime_get_ns(); 6369 6370 bfqq->ttime.last_end_request = now_ns; 6371 6372 /* 6373 * Using us instead of ns, to get a reasonable precision in 6374 * computing rate in next check. 6375 */ 6376 delta_us = div_u64(now_ns - bfqd->last_completion, NSEC_PER_USEC); 6377 6378 /* 6379 * If the request took rather long to complete, and, according 6380 * to the maximum request size recorded, this completion latency 6381 * implies that the request was certainly served at a very low 6382 * rate (less than 1M sectors/sec), then the whole observation 6383 * interval that lasts up to this time instant cannot be a 6384 * valid time interval for computing a new peak rate. 
Invoke 6385 * bfq_update_rate_reset to have the following three steps 6386 * taken: 6387 * - close the observation interval at the last (previous) 6388 * request dispatch or completion 6389 * - compute rate, if possible, for that observation interval 6390 * - reset to zero samples, which will trigger a proper 6391 * re-initialization of the observation interval on next 6392 * dispatch 6393 */ 6394 if (delta_us > BFQ_MIN_TT/NSEC_PER_USEC && 6395 (bfqd->last_rq_max_size<<BFQ_RATE_SHIFT)/delta_us < 6396 1UL<<(BFQ_RATE_SHIFT - 10)) 6397 bfq_update_rate_reset(bfqd, NULL); 6398 bfqd->last_completion = now_ns; 6399 /* 6400 * Shared queues are likely to receive I/O at a high 6401 * rate. This may deceptively let them be considered as wakers 6402 * of other queues. But a false waker will unjustly steal 6403 * bandwidth to its supposedly woken queue. So considering 6404 * also shared queues in the waking mechanism may cause more 6405 * control troubles than throughput benefits. Then reset 6406 * last_completed_rq_bfqq if bfqq is a shared queue. 6407 */ 6408 if (!bfq_bfqq_coop(bfqq)) 6409 bfqd->last_completed_rq_bfqq = bfqq; 6410 else 6411 bfqd->last_completed_rq_bfqq = NULL; 6412 6413 /* 6414 * If we are waiting to discover whether the request pattern 6415 * of the task associated with the queue is actually 6416 * isochronous, and both requisites for this condition to hold 6417 * are now satisfied, then compute soft_rt_next_start (see the 6418 * comments on the function bfq_bfqq_softrt_next_start()). We 6419 * do not compute soft_rt_next_start if bfqq is in interactive 6420 * weight raising (see the comments in bfq_bfqq_expire() for 6421 * an explanation). We schedule this delayed update when bfqq 6422 * expires, if it still has in-flight requests. 6423 */ 6424 if (bfq_bfqq_softrt_update(bfqq) && bfqq->dispatched == 0 && 6425 RB_EMPTY_ROOT(&bfqq->sort_list) && 6426 bfqq->wr_coeff != bfqd->bfq_wr_coeff) 6427 bfqq->soft_rt_next_start = 6428 bfq_bfqq_softrt_next_start(bfqd, bfqq); 6429 6430 /* 6431 * If this is the in-service queue, check if it needs to be expired, 6432 * or if we want to idle in case it has no pending requests. 6433 */ 6434 if (bfqd->in_service_queue == bfqq) { 6435 if (bfq_bfqq_must_idle(bfqq)) { 6436 if (bfqq->dispatched == 0) 6437 bfq_arm_slice_timer(bfqd); 6438 /* 6439 * If we get here, we do not expire bfqq, even 6440 * if bfqq was in budget timeout or had no 6441 * more requests (as controlled in the next 6442 * conditional instructions). The reason for 6443 * not expiring bfqq is as follows. 6444 * 6445 * Here bfqq->dispatched > 0 holds, but 6446 * bfq_bfqq_must_idle() returned true. This 6447 * implies that, even if no request arrives 6448 * for bfqq before bfqq->dispatched reaches 0, 6449 * bfqq will, however, not be expired on the 6450 * completion event that causes bfqq->dispatch 6451 * to reach zero. In contrast, on this event, 6452 * bfqq will start enjoying device idling 6453 * (I/O-dispatch plugging). 6454 * 6455 * But, if we expired bfqq here, bfqq would 6456 * not have the chance to enjoy device idling 6457 * when bfqq->dispatched finally reaches 6458 * zero. This would expose bfqq to violation 6459 * of its reserved service guarantees. 
6460 */ 6461 return; 6462 } else if (bfq_may_expire_for_budg_timeout(bfqq)) 6463 bfq_bfqq_expire(bfqd, bfqq, false, 6464 BFQQE_BUDGET_TIMEOUT); 6465 else if (RB_EMPTY_ROOT(&bfqq->sort_list) && 6466 (bfqq->dispatched == 0 || 6467 !bfq_better_to_idle(bfqq))) 6468 bfq_bfqq_expire(bfqd, bfqq, false, 6469 BFQQE_NO_MORE_REQUESTS); 6470 } 6471 6472 if (!bfqd->tot_rq_in_driver) 6473 bfq_schedule_dispatch(bfqd); 6474 } 6475 6476 /* 6477 * The processes associated with bfqq may happen to generate their 6478 * cumulative I/O at a lower rate than the rate at which the device 6479 * could serve the same I/O. This is rather probable, e.g., if only 6480 * one process is associated with bfqq and the device is an SSD. It 6481 * results in bfqq becoming often empty while in service. In this 6482 * respect, if BFQ is allowed to switch to another queue when bfqq 6483 * remains empty, then the device goes on being fed with I/O requests, 6484 * and the throughput is not affected. In contrast, if BFQ is not 6485 * allowed to switch to another queue---because bfqq is sync and 6486 * I/O-dispatch needs to be plugged while bfqq is temporarily 6487 * empty---then, during the service of bfqq, there will be frequent 6488 * "service holes", i.e., time intervals during which bfqq gets empty 6489 * and the device can only consume the I/O already queued in its 6490 * hardware queues. During service holes, the device may even get to 6491 * remaining idle. In the end, during the service of bfqq, the device 6492 * is driven at a lower speed than the one it can reach with the kind 6493 * of I/O flowing through bfqq. 6494 * 6495 * To counter this loss of throughput, BFQ implements a "request 6496 * injection mechanism", which tries to fill the above service holes 6497 * with I/O requests taken from other queues. The hard part in this 6498 * mechanism is finding the right amount of I/O to inject, so as to 6499 * both boost throughput and not break bfqq's bandwidth and latency 6500 * guarantees. In this respect, the mechanism maintains a per-queue 6501 * inject limit, computed as below. While bfqq is empty, the injection 6502 * mechanism dispatches extra I/O requests only until the total number 6503 * of I/O requests in flight---i.e., already dispatched but not yet 6504 * completed---remains lower than this limit. 6505 * 6506 * A first definition comes in handy to introduce the algorithm by 6507 * which the inject limit is computed. We define as first request for 6508 * bfqq, an I/O request for bfqq that arrives while bfqq is in 6509 * service, and causes bfqq to switch from empty to non-empty. The 6510 * algorithm updates the limit as a function of the effect of 6511 * injection on the service times of only the first requests of 6512 * bfqq. The reason for this restriction is that these are the 6513 * requests whose service time is affected most, because they are the 6514 * first to arrive after injection possibly occurred. 6515 * 6516 * To evaluate the effect of injection, the algorithm measures the 6517 * "total service time" of first requests. We define as total service 6518 * time of an I/O request, the time that elapses since when the 6519 * request is enqueued into bfqq, to when it is completed. This 6520 * quantity allows the whole effect of injection to be measured. It is 6521 * easy to see why. Suppose that some requests of other queues are 6522 * actually injected while bfqq is empty, and that a new request R 6523 * then arrives for bfqq. 
If the device does start to serve all or 6524 * part of the injected requests during the service hole, then, 6525 * because of this extra service, it may delay the next invocation of 6526 * the dispatch hook of BFQ. Then, even after R gets eventually 6527 * dispatched, the device may delay the actual service of R if it is 6528 * still busy serving the extra requests, or if it decides to serve, 6529 * before R, some extra request still present in its queues. As a 6530 * conclusion, the cumulative extra delay caused by injection can be 6531 * easily evaluated by just comparing the total service time of first 6532 * requests with and without injection. 6533 * 6534 * The limit-update algorithm works as follows. On the arrival of a 6535 * first request of bfqq, the algorithm measures the total time of the 6536 * request only if one of the three cases below holds, and, for each 6537 * case, it updates the limit as described below: 6538 * 6539 * (1) If there is no in-flight request. This gives a baseline for the 6540 * total service time of the requests of bfqq. If the baseline has 6541 * not been computed yet, then, after computing it, the limit is 6542 * set to 1, to start boosting throughput, and to prepare the 6543 * ground for the next case. If the baseline has already been 6544 * computed, then it is updated, in case it results to be lower 6545 * than the previous value. 6546 * 6547 * (2) If the limit is higher than 0 and there are in-flight 6548 * requests. By comparing the total service time in this case with 6549 * the above baseline, it is possible to know at which extent the 6550 * current value of the limit is inflating the total service 6551 * time. If the inflation is below a certain threshold, then bfqq 6552 * is assumed to be suffering from no perceivable loss of its 6553 * service guarantees, and the limit is even tentatively 6554 * increased. If the inflation is above the threshold, then the 6555 * limit is decreased. Due to the lack of any hysteresis, this 6556 * logic makes the limit oscillate even in steady workload 6557 * conditions. Yet we opted for it, because it is fast in reaching 6558 * the best value for the limit, as a function of the current I/O 6559 * workload. To reduce oscillations, this step is disabled for a 6560 * short time interval after the limit happens to be decreased. 6561 * 6562 * (3) Periodically, after resetting the limit, to make sure that the 6563 * limit eventually drops in case the workload changes. This is 6564 * needed because, after the limit has gone safely up for a 6565 * certain workload, it is impossible to guess whether the 6566 * baseline total service time may have changed, without measuring 6567 * it again without injection. A more effective version of this 6568 * step might be to just sample the baseline, by interrupting 6569 * injection only once, and then to reset/lower the limit only if 6570 * the total service time with the current limit does happen to be 6571 * too large. 6572 * 6573 * More details on each step are provided in the comments on the 6574 * pieces of code that implement these steps: the branch handling the 6575 * transition from empty to non empty in bfq_add_request(), the branch 6576 * handling injection in bfq_select_queue(), and the function 6577 * bfq_choose_bfqq_for_injection(). These comments also explain some 6578 * exceptions, made by the injection mechanism in some special cases. 
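 *
 * A small, purely illustrative numeric example of step (2) above
 * (editorial note, mirroring bfq_update_inject_limit() below): suppose
 * the baseline total service time is 1 ms, so the tolerated threshold
 * is 1.5 ms, i.e., 3/2 of the baseline. If, with a limit of 2 and some
 * injected I/O in flight, a first request of bfqq takes 1.8 ms, the
 * inflation exceeds the threshold and the limit is lowered to 1; if it
 * takes 1.2 ms instead, and the limit does not exceed the maximum
 * number of requests observed in the drive, the limit is raised to 3.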
6579 */ 6580 static void bfq_update_inject_limit(struct bfq_data *bfqd, 6581 struct bfq_queue *bfqq) 6582 { 6583 u64 tot_time_ns = ktime_get_ns() - bfqd->last_empty_occupied_ns; 6584 unsigned int old_limit = bfqq->inject_limit; 6585 6586 if (bfqq->last_serv_time_ns > 0 && bfqd->rqs_injected) { 6587 u64 threshold = (bfqq->last_serv_time_ns * 3)>>1; 6588 6589 if (tot_time_ns >= threshold && old_limit > 0) { 6590 bfqq->inject_limit--; 6591 bfqq->decrease_time_jif = jiffies; 6592 } else if (tot_time_ns < threshold && 6593 old_limit <= bfqd->max_rq_in_driver) 6594 bfqq->inject_limit++; 6595 } 6596 6597 /* 6598 * Either we still have to compute the base value for the 6599 * total service time, and there seem to be the right 6600 * conditions to do it, or we can lower the last base value 6601 * computed. 6602 * 6603 * NOTE: (bfqd->tot_rq_in_driver == 1) means that there is no I/O 6604 * request in flight, because this function is in the code 6605 * path that handles the completion of a request of bfqq, and, 6606 * in particular, this function is executed before 6607 * bfqd->tot_rq_in_driver is decremented in such a code path. 6608 */ 6609 if ((bfqq->last_serv_time_ns == 0 && bfqd->tot_rq_in_driver == 1) || 6610 tot_time_ns < bfqq->last_serv_time_ns) { 6611 if (bfqq->last_serv_time_ns == 0) { 6612 /* 6613 * Now we certainly have a base value: make sure we 6614 * start trying injection. 6615 */ 6616 bfqq->inject_limit = max_t(unsigned int, 1, old_limit); 6617 } 6618 bfqq->last_serv_time_ns = tot_time_ns; 6619 } else if (!bfqd->rqs_injected && bfqd->tot_rq_in_driver == 1) 6620 /* 6621 * No I/O injected and no request still in service in 6622 * the drive: these are the exact conditions for 6623 * computing the base value of the total service time 6624 * for bfqq. So let's update this value, because it is 6625 * rather variable. For example, it varies if the size 6626 * or the spatial locality of the I/O requests in bfqq 6627 * change. 6628 */ 6629 bfqq->last_serv_time_ns = tot_time_ns; 6630 6631 6632 /* update complete, not waiting for any request completion any longer */ 6633 bfqd->waited_rq = NULL; 6634 bfqd->rqs_injected = false; 6635 } 6636 6637 /* 6638 * Handle either a requeue or a finish for rq. The things to do are 6639 * the same in both cases: all references to rq are to be dropped. In 6640 * particular, rq is considered completed from the point of view of 6641 * the scheduler. 6642 */ 6643 static void bfq_finish_requeue_request(struct request *rq) 6644 { 6645 struct bfq_queue *bfqq = RQ_BFQQ(rq); 6646 struct bfq_data *bfqd; 6647 unsigned long flags; 6648 6649 /* 6650 * rq either is not associated with any icq, or is an already 6651 * requeued request that has not (yet) been re-inserted into 6652 * a bfq_queue. 6653 */ 6654 if (!rq->elv.icq || !bfqq) 6655 return; 6656 6657 bfqd = bfqq->bfqd; 6658 6659 if (rq->rq_flags & RQF_STARTED) 6660 bfqg_stats_update_completion(bfqq_group(bfqq), 6661 rq->start_time_ns, 6662 rq->io_start_time_ns, 6663 rq->cmd_flags); 6664 6665 spin_lock_irqsave(&bfqd->lock, flags); 6666 if (likely(rq->rq_flags & RQF_STARTED)) { 6667 if (rq == bfqd->waited_rq) 6668 bfq_update_inject_limit(bfqd, bfqq); 6669 6670 bfq_completed_request(bfqq, bfqd); 6671 } 6672 bfqq_request_freed(bfqq); 6673 bfq_put_queue(bfqq); 6674 RQ_BIC(rq)->requests--; 6675 spin_unlock_irqrestore(&bfqd->lock, flags); 6676 6677 /* 6678 * Reset private fields. 
In case of a requeue, this allows 6679 * this function to correctly do nothing if it is spuriously 6680 * invoked again on this same request (see the check at the 6681 * beginning of the function). Probably, a better general 6682 * design would be to prevent blk-mq from invoking the requeue 6683 * or finish hooks of an elevator, for a request that is not 6684 * referred by that elevator. 6685 * 6686 * Resetting the following fields would break the 6687 * request-insertion logic if rq is re-inserted into a bfq 6688 * internal queue, without a re-preparation. Here we assume 6689 * that re-insertions of requeued requests, without 6690 * re-preparation, can happen only for pass_through or at_head 6691 * requests (which are not re-inserted into bfq internal 6692 * queues). 6693 */ 6694 rq->elv.priv[0] = NULL; 6695 rq->elv.priv[1] = NULL; 6696 } 6697 6698 static void bfq_finish_request(struct request *rq) 6699 { 6700 bfq_finish_requeue_request(rq); 6701 6702 if (rq->elv.icq) { 6703 put_io_context(rq->elv.icq->ioc); 6704 rq->elv.icq = NULL; 6705 } 6706 } 6707 6708 /* 6709 * Removes the association between the current task and bfqq, assuming 6710 * that bic points to the bfq iocontext of the task. 6711 * Returns NULL if a new bfqq should be allocated, or the old bfqq if this 6712 * was the last process referring to that bfqq. 6713 */ 6714 static struct bfq_queue * 6715 bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq) 6716 { 6717 bfq_log_bfqq(bfqq->bfqd, bfqq, "splitting queue"); 6718 6719 if (bfqq_process_refs(bfqq) == 1) { 6720 bfqq->pid = current->pid; 6721 bfq_clear_bfqq_coop(bfqq); 6722 bfq_clear_bfqq_split_coop(bfqq); 6723 return bfqq; 6724 } 6725 6726 bic_set_bfqq(bic, NULL, true, bfqq->actuator_idx); 6727 6728 bfq_put_cooperator(bfqq); 6729 6730 bfq_release_process_ref(bfqq->bfqd, bfqq); 6731 return NULL; 6732 } 6733 6734 static struct bfq_queue *bfq_get_bfqq_handle_split(struct bfq_data *bfqd, 6735 struct bfq_io_cq *bic, 6736 struct bio *bio, 6737 bool split, bool is_sync, 6738 bool *new_queue) 6739 { 6740 unsigned int act_idx = bfq_actuator_index(bfqd, bio); 6741 struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync, act_idx); 6742 struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data[act_idx]; 6743 6744 if (likely(bfqq && bfqq != &bfqd->oom_bfqq)) 6745 return bfqq; 6746 6747 if (new_queue) 6748 *new_queue = true; 6749 6750 if (bfqq) 6751 bfq_put_queue(bfqq); 6752 bfqq = bfq_get_queue(bfqd, bio, is_sync, bic, split); 6753 6754 bic_set_bfqq(bic, bfqq, is_sync, act_idx); 6755 if (split && is_sync) { 6756 if ((bfqq_data->was_in_burst_list && bfqd->large_burst) || 6757 bfqq_data->saved_in_large_burst) 6758 bfq_mark_bfqq_in_large_burst(bfqq); 6759 else { 6760 bfq_clear_bfqq_in_large_burst(bfqq); 6761 if (bfqq_data->was_in_burst_list) 6762 /* 6763 * If bfqq was in the current 6764 * burst list before being 6765 * merged, then we have to add 6766 * it back. And we do not need 6767 * to increase burst_size, as 6768 * we did not decrement 6769 * burst_size when we removed 6770 * bfqq from the burst list as 6771 * a consequence of a merge 6772 * (see comments in 6773 * bfq_put_queue). In this 6774 * respect, it would be rather 6775 * costly to know whether the 6776 * current burst list is still 6777 * the same burst list from 6778 * which bfqq was removed on 6779 * the merge. To avoid this 6780 * cost, if bfqq was in a 6781 * burst list, then we add 6782 * bfqq to the current burst 6783 * list without any further 6784 * check. 
This can cause 6785 * inappropriate insertions, 6786 * but rarely enough to not 6787 * harm the detection of large 6788 * bursts significantly. 6789 */ 6790 hlist_add_head(&bfqq->burst_list_node, 6791 &bfqd->burst_list); 6792 } 6793 bfqq->split_time = jiffies; 6794 } 6795 6796 return bfqq; 6797 } 6798 6799 /* 6800 * Only reset private fields. The actual request preparation will be 6801 * performed by bfq_init_rq, when rq is either inserted or merged. See 6802 * comments on bfq_init_rq for the reason behind this delayed 6803 * preparation. 6804 */ 6805 static void bfq_prepare_request(struct request *rq) 6806 { 6807 rq->elv.icq = ioc_find_get_icq(rq->q); 6808 6809 /* 6810 * Regardless of whether we have an icq attached, we have to 6811 * clear the scheduler pointers, as they might point to 6812 * previously allocated bic/bfqq structs. 6813 */ 6814 rq->elv.priv[0] = rq->elv.priv[1] = NULL; 6815 } 6816 6817 /* 6818 * If needed, init rq, allocate bfq data structures associated with 6819 * rq, and increment reference counters in the destination bfq_queue 6820 * for rq. Return the destination bfq_queue for rq, or NULL if rq is 6821 * not associated with any bfq_queue. 6822 * 6823 * This function is invoked by the functions that perform rq insertion 6824 * or merging. One may have expected the above preparation operations 6825 * to be performed in bfq_prepare_request, and not delayed to when rq 6826 * is inserted or merged. The rationale behind this delayed 6827 * preparation is that, after the prepare_request hook is invoked for 6828 * rq, rq may still be transformed into a request with no icq, i.e., a 6829 * request not associated with any queue. No bfq hook is invoked to 6830 * signal this transformation. As a consequence, should these 6831 * preparation operations be performed when the prepare_request hook 6832 * is invoked, and should rq be transformed one moment later, bfq 6833 * would end up in an inconsistent state, because it would have 6834 * incremented some queue counters for an rq destined to 6835 * transformation, without any chance to correctly lower these 6836 * counters back. In contrast, no transformation can happen any longer for 6837 * rq after rq has been inserted or merged. So, it is safe to execute 6838 * these preparation operations when rq is finally inserted or merged. 6839 */ 6840 static struct bfq_queue *bfq_init_rq(struct request *rq) 6841 { 6842 struct request_queue *q = rq->q; 6843 struct bio *bio = rq->bio; 6844 struct bfq_data *bfqd = q->elevator->elevator_data; 6845 struct bfq_io_cq *bic; 6846 const int is_sync = rq_is_sync(rq); 6847 struct bfq_queue *bfqq; 6848 bool new_queue = false; 6849 bool bfqq_already_existing = false, split = false; 6850 unsigned int a_idx = bfq_actuator_index(bfqd, bio); 6851 6852 if (unlikely(!rq->elv.icq)) 6853 return NULL; 6854 6855 /* 6856 * Assuming that RQ_BFQQ(rq) is set only if everything is set 6857 * for this rq. This holds true, because this function is 6858 * invoked only for insertion or merging, and, after such 6859 * events, a request cannot be manipulated any longer before 6860 * being removed from bfq. 6861 */ 6862 if (RQ_BFQQ(rq)) 6863 return RQ_BFQQ(rq); 6864 6865 bic = icq_to_bic(rq->elv.icq); 6866 6867 bfq_check_ioprio_change(bic, bio); 6868 6869 bfq_bic_update_cgroup(bic, bio); 6870 6871 bfqq = bfq_get_bfqq_handle_split(bfqd, bic, bio, false, is_sync, 6872 &new_queue); 6873 6874 if (likely(!new_queue)) { 6875 /* If the queue was seeky for too long, break it apart.
*/ 6876 if (bfq_bfqq_coop(bfqq) && bfq_bfqq_split_coop(bfqq) && 6877 !bic->bfqq_data[a_idx].stably_merged) { 6878 struct bfq_queue *old_bfqq = bfqq; 6879 6880 /* Update bic before losing reference to bfqq */ 6881 if (bfq_bfqq_in_large_burst(bfqq)) 6882 bic->bfqq_data[a_idx].saved_in_large_burst = 6883 true; 6884 6885 bfqq = bfq_split_bfqq(bic, bfqq); 6886 split = true; 6887 6888 if (!bfqq) { 6889 bfqq = bfq_get_bfqq_handle_split(bfqd, bic, bio, 6890 true, is_sync, 6891 NULL); 6892 if (unlikely(bfqq == &bfqd->oom_bfqq)) 6893 bfqq_already_existing = true; 6894 } else 6895 bfqq_already_existing = true; 6896 6897 if (!bfqq_already_existing) { 6898 bfqq->waker_bfqq = old_bfqq->waker_bfqq; 6899 bfqq->tentative_waker_bfqq = NULL; 6900 6901 /* 6902 * If the waker queue disappears, then 6903 * new_bfqq->waker_bfqq must be 6904 * reset. So insert new_bfqq into the 6905 * woken_list of the waker. See 6906 * bfq_check_waker for details. 6907 */ 6908 if (bfqq->waker_bfqq) 6909 hlist_add_head(&bfqq->woken_list_node, 6910 &bfqq->waker_bfqq->woken_list); 6911 } 6912 } 6913 } 6914 6915 bfqq_request_allocated(bfqq); 6916 bfqq->ref++; 6917 bic->requests++; 6918 bfq_log_bfqq(bfqd, bfqq, "get_request %p: bfqq %p, %d", 6919 rq, bfqq, bfqq->ref); 6920 6921 rq->elv.priv[0] = bic; 6922 rq->elv.priv[1] = bfqq; 6923 6924 /* 6925 * If a bfq_queue has only one process reference, it is owned 6926 * by only this bic: we can then set bfqq->bic = bic. in 6927 * addition, if the queue has also just been split, we have to 6928 * resume its state. 6929 */ 6930 if (likely(bfqq != &bfqd->oom_bfqq) && bfqq_process_refs(bfqq) == 1) { 6931 bfqq->bic = bic; 6932 if (split) { 6933 /* 6934 * The queue has just been split from a shared 6935 * queue: restore the idle window and the 6936 * possible weight raising period. 6937 */ 6938 bfq_bfqq_resume_state(bfqq, bfqd, bic, 6939 bfqq_already_existing); 6940 } 6941 } 6942 6943 /* 6944 * Consider bfqq as possibly belonging to a burst of newly 6945 * created queues only if: 6946 * 1) A burst is actually happening (bfqd->burst_size > 0) 6947 * or 6948 * 2) There is no other active queue. In fact, if, in 6949 * contrast, there are active queues not belonging to the 6950 * possible burst bfqq may belong to, then there is no gain 6951 * in considering bfqq as belonging to a burst, and 6952 * therefore in not weight-raising bfqq. See comments on 6953 * bfq_handle_burst(). 6954 * 6955 * This filtering also helps eliminating false positives, 6956 * occurring when bfqq does not belong to an actual large 6957 * burst, but some background task (e.g., a service) happens 6958 * to trigger the creation of new queues very close to when 6959 * bfqq and its possible companion queues are created. See 6960 * comments on bfq_handle_burst() for further details also on 6961 * this issue. 6962 */ 6963 if (unlikely(bfq_bfqq_just_created(bfqq) && 6964 (bfqd->burst_size > 0 || 6965 bfq_tot_busy_queues(bfqd) == 0))) 6966 bfq_handle_burst(bfqd, bfqq); 6967 6968 return bfqq; 6969 } 6970 6971 static void 6972 bfq_idle_slice_timer_body(struct bfq_data *bfqd, struct bfq_queue *bfqq) 6973 { 6974 enum bfqq_expiration reason; 6975 unsigned long flags; 6976 6977 spin_lock_irqsave(&bfqd->lock, flags); 6978 6979 /* 6980 * Considering that bfqq may be in race, we should firstly check 6981 * whether bfqq is in service before doing something on it. 
If 6982 * the bfqq in race is not in service, it has already been expired 6983 * through __bfq_bfqq_expire func and its wait_request flags has 6984 * been cleared in __bfq_bfqd_reset_in_service func. 6985 */ 6986 if (bfqq != bfqd->in_service_queue) { 6987 spin_unlock_irqrestore(&bfqd->lock, flags); 6988 return; 6989 } 6990 6991 bfq_clear_bfqq_wait_request(bfqq); 6992 6993 if (bfq_bfqq_budget_timeout(bfqq)) 6994 /* 6995 * Also here the queue can be safely expired 6996 * for budget timeout without wasting 6997 * guarantees 6998 */ 6999 reason = BFQQE_BUDGET_TIMEOUT; 7000 else if (bfqq->queued[0] == 0 && bfqq->queued[1] == 0) 7001 /* 7002 * The queue may not be empty upon timer expiration, 7003 * because we may not disable the timer when the 7004 * first request of the in-service queue arrives 7005 * during disk idling. 7006 */ 7007 reason = BFQQE_TOO_IDLE; 7008 else 7009 goto schedule_dispatch; 7010 7011 bfq_bfqq_expire(bfqd, bfqq, true, reason); 7012 7013 schedule_dispatch: 7014 bfq_schedule_dispatch(bfqd); 7015 spin_unlock_irqrestore(&bfqd->lock, flags); 7016 } 7017 7018 /* 7019 * Handler of the expiration of the timer running if the in-service queue 7020 * is idling inside its time slice. 7021 */ 7022 static enum hrtimer_restart bfq_idle_slice_timer(struct hrtimer *timer) 7023 { 7024 struct bfq_data *bfqd = container_of(timer, struct bfq_data, 7025 idle_slice_timer); 7026 struct bfq_queue *bfqq = bfqd->in_service_queue; 7027 7028 /* 7029 * Theoretical race here: the in-service queue can be NULL or 7030 * different from the queue that was idling if a new request 7031 * arrives for the current queue and there is a full dispatch 7032 * cycle that changes the in-service queue. This can hardly 7033 * happen, but in the worst case we just expire a queue too 7034 * early. 7035 */ 7036 if (bfqq) 7037 bfq_idle_slice_timer_body(bfqd, bfqq); 7038 7039 return HRTIMER_NORESTART; 7040 } 7041 7042 static void __bfq_put_async_bfqq(struct bfq_data *bfqd, 7043 struct bfq_queue **bfqq_ptr) 7044 { 7045 struct bfq_queue *bfqq = *bfqq_ptr; 7046 7047 bfq_log(bfqd, "put_async_bfqq: %p", bfqq); 7048 if (bfqq) { 7049 bfq_bfqq_move(bfqd, bfqq, bfqd->root_group); 7050 7051 bfq_log_bfqq(bfqd, bfqq, "put_async_bfqq: putting %p, %d", 7052 bfqq, bfqq->ref); 7053 bfq_put_queue(bfqq); 7054 *bfqq_ptr = NULL; 7055 } 7056 } 7057 7058 /* 7059 * Release all the bfqg references to its async queues. If we are 7060 * deallocating the group these queues may still contain requests, so 7061 * we reparent them to the root cgroup (i.e., the only one that will 7062 * exist for sure until all the requests on a device are gone). 7063 */ 7064 void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg) 7065 { 7066 int i, j, k; 7067 7068 for (k = 0; k < bfqd->num_actuators; k++) { 7069 for (i = 0; i < 2; i++) 7070 for (j = 0; j < IOPRIO_NR_LEVELS; j++) 7071 __bfq_put_async_bfqq(bfqd, &bfqg->async_bfqq[i][j][k]); 7072 7073 __bfq_put_async_bfqq(bfqd, &bfqg->async_idle_bfqq[k]); 7074 } 7075 } 7076 7077 /* 7078 * See the comments on bfq_limit_depth for the purpose of 7079 * the depths set in the function. Return minimum shallow depth we'll use. 7080 */ 7081 static void bfq_update_depths(struct bfq_data *bfqd, struct sbitmap_queue *bt) 7082 { 7083 unsigned int depth = 1U << bt->sb.shift; 7084 7085 bfqd->full_depth_shift = bt->sb.shift; 7086 /* 7087 * In-word depths if no bfq_queue is being weight-raised: 7088 * leaving 25% of tags only for sync reads. 
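 *
 * Editorial, purely illustrative example: with bt->sb.shift = 6, i.e.,
 * a 64-tag word, the formulas below allow at most 32 tags for async
 * I/O and 48 tags for sync writes while no queue is weight-raised, so
 * at least 16 tags (25%) always remain available to sync reads. If
 * some queue is weight-raised, those limits drop to 12 and 24 tags,
 * leaving at least 40 tags (~63%) to sync reads.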
7089 * 7090 * In next formulas, right-shift the value 7091 * (1U<<bt->sb.shift), instead of computing directly 7092 * (1U<<(bt->sb.shift - something)), to be robust against 7093 * any possible value of bt->sb.shift, without having to 7094 * limit 'something'. 7095 */ 7096 /* no more than 50% of tags for async I/O */ 7097 bfqd->word_depths[0][0] = max(depth >> 1, 1U); 7098 /* 7099 * no more than 75% of tags for sync writes (25% extra tags 7100 * w.r.t. async I/O, to prevent async I/O from starving sync 7101 * writes) 7102 */ 7103 bfqd->word_depths[0][1] = max((depth * 3) >> 2, 1U); 7104 7105 /* 7106 * In-word depths in case some bfq_queue is being weight- 7107 * raised: leaving ~63% of tags for sync reads. This is the 7108 * highest percentage for which, in our tests, application 7109 * start-up times didn't suffer from any regression due to tag 7110 * shortage. 7111 */ 7112 /* no more than ~18% of tags for async I/O */ 7113 bfqd->word_depths[1][0] = max((depth * 3) >> 4, 1U); 7114 /* no more than ~37% of tags for sync writes (~20% extra tags) */ 7115 bfqd->word_depths[1][1] = max((depth * 6) >> 4, 1U); 7116 } 7117 7118 static void bfq_depth_updated(struct blk_mq_hw_ctx *hctx) 7119 { 7120 struct bfq_data *bfqd = hctx->queue->elevator->elevator_data; 7121 struct blk_mq_tags *tags = hctx->sched_tags; 7122 7123 bfq_update_depths(bfqd, &tags->bitmap_tags); 7124 sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, 1); 7125 } 7126 7127 static int bfq_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int index) 7128 { 7129 bfq_depth_updated(hctx); 7130 return 0; 7131 } 7132 7133 static void bfq_exit_queue(struct elevator_queue *e) 7134 { 7135 struct bfq_data *bfqd = e->elevator_data; 7136 struct bfq_queue *bfqq, *n; 7137 7138 hrtimer_cancel(&bfqd->idle_slice_timer); 7139 7140 spin_lock_irq(&bfqd->lock); 7141 list_for_each_entry_safe(bfqq, n, &bfqd->idle_list, bfqq_list) 7142 bfq_deactivate_bfqq(bfqd, bfqq, false, false); 7143 spin_unlock_irq(&bfqd->lock); 7144 7145 hrtimer_cancel(&bfqd->idle_slice_timer); 7146 7147 /* release oom-queue reference to root group */ 7148 bfqg_and_blkg_put(bfqd->root_group); 7149 7150 #ifdef CONFIG_BFQ_GROUP_IOSCHED 7151 blkcg_deactivate_policy(bfqd->queue->disk, &blkcg_policy_bfq); 7152 #else 7153 spin_lock_irq(&bfqd->lock); 7154 bfq_put_async_queues(bfqd, bfqd->root_group); 7155 kfree(bfqd->root_group); 7156 spin_unlock_irq(&bfqd->lock); 7157 #endif 7158 7159 blk_stat_disable_accounting(bfqd->queue); 7160 clear_bit(ELEVATOR_FLAG_DISABLE_WBT, &e->flags); 7161 wbt_enable_default(bfqd->queue->disk); 7162 7163 kfree(bfqd); 7164 } 7165 7166 static void bfq_init_root_group(struct bfq_group *root_group, 7167 struct bfq_data *bfqd) 7168 { 7169 int i; 7170 7171 #ifdef CONFIG_BFQ_GROUP_IOSCHED 7172 root_group->entity.parent = NULL; 7173 root_group->my_entity = NULL; 7174 root_group->bfqd = bfqd; 7175 #endif 7176 root_group->rq_pos_tree = RB_ROOT; 7177 for (i = 0; i < BFQ_IOPRIO_CLASSES; i++) 7178 root_group->sched_data.service_tree[i] = BFQ_SERVICE_TREE_INIT; 7179 root_group->sched_data.bfq_class_idle_last_service = jiffies; 7180 } 7181 7182 static int bfq_init_queue(struct request_queue *q, struct elevator_type *e) 7183 { 7184 struct bfq_data *bfqd; 7185 struct elevator_queue *eq; 7186 unsigned int i; 7187 struct blk_independent_access_ranges *ia_ranges = q->disk->ia_ranges; 7188 7189 eq = elevator_alloc(q, e); 7190 if (!eq) 7191 return -ENOMEM; 7192 7193 bfqd = kzalloc_node(sizeof(*bfqd), GFP_KERNEL, q->node); 7194 if (!bfqd) { 7195 kobject_put(&eq->kobj); 7196 return -ENOMEM; 
7197 } 7198 eq->elevator_data = bfqd; 7199 7200 spin_lock_irq(&q->queue_lock); 7201 q->elevator = eq; 7202 spin_unlock_irq(&q->queue_lock); 7203 7204 /* 7205 * Our fallback bfqq if bfq_find_alloc_queue() runs into OOM issues. 7206 * Grab a permanent reference to it, so that the normal code flow 7207 * will not attempt to free it. 7208 * Set zero as actuator index: we will pretend that 7209 * all I/O requests are for the same actuator. 7210 */ 7211 bfq_init_bfqq(bfqd, &bfqd->oom_bfqq, NULL, 1, 0, 0); 7212 bfqd->oom_bfqq.ref++; 7213 bfqd->oom_bfqq.new_ioprio = BFQ_DEFAULT_QUEUE_IOPRIO; 7214 bfqd->oom_bfqq.new_ioprio_class = IOPRIO_CLASS_BE; 7215 bfqd->oom_bfqq.entity.new_weight = 7216 bfq_ioprio_to_weight(bfqd->oom_bfqq.new_ioprio); 7217 7218 /* oom_bfqq does not participate to bursts */ 7219 bfq_clear_bfqq_just_created(&bfqd->oom_bfqq); 7220 7221 /* 7222 * Trigger weight initialization, according to ioprio, at the 7223 * oom_bfqq's first activation. The oom_bfqq's ioprio and ioprio 7224 * class won't be changed any more. 7225 */ 7226 bfqd->oom_bfqq.entity.prio_changed = 1; 7227 7228 bfqd->queue = q; 7229 7230 bfqd->num_actuators = 1; 7231 /* 7232 * If the disk supports multiple actuators, copy independent 7233 * access ranges from the request queue structure. 7234 */ 7235 spin_lock_irq(&q->queue_lock); 7236 if (ia_ranges) { 7237 /* 7238 * Check if the disk ia_ranges size exceeds the current bfq 7239 * actuator limit. 7240 */ 7241 if (ia_ranges->nr_ia_ranges > BFQ_MAX_ACTUATORS) { 7242 pr_crit("nr_ia_ranges higher than act limit: iars=%d, max=%d.\n", 7243 ia_ranges->nr_ia_ranges, BFQ_MAX_ACTUATORS); 7244 pr_crit("Falling back to single actuator mode.\n"); 7245 } else { 7246 bfqd->num_actuators = ia_ranges->nr_ia_ranges; 7247 7248 for (i = 0; i < bfqd->num_actuators; i++) { 7249 bfqd->sector[i] = ia_ranges->ia_range[i].sector; 7250 bfqd->nr_sectors[i] = 7251 ia_ranges->ia_range[i].nr_sectors; 7252 } 7253 } 7254 } 7255 7256 /* Otherwise use single-actuator dev info */ 7257 if (bfqd->num_actuators == 1) { 7258 bfqd->sector[0] = 0; 7259 bfqd->nr_sectors[0] = get_capacity(q->disk); 7260 } 7261 spin_unlock_irq(&q->queue_lock); 7262 7263 INIT_LIST_HEAD(&bfqd->dispatch); 7264 7265 hrtimer_init(&bfqd->idle_slice_timer, CLOCK_MONOTONIC, 7266 HRTIMER_MODE_REL); 7267 bfqd->idle_slice_timer.function = bfq_idle_slice_timer; 7268 7269 bfqd->queue_weights_tree = RB_ROOT_CACHED; 7270 #ifdef CONFIG_BFQ_GROUP_IOSCHED 7271 bfqd->num_groups_with_pending_reqs = 0; 7272 #endif 7273 7274 INIT_LIST_HEAD(&bfqd->active_list[0]); 7275 INIT_LIST_HEAD(&bfqd->active_list[1]); 7276 INIT_LIST_HEAD(&bfqd->idle_list); 7277 INIT_HLIST_HEAD(&bfqd->burst_list); 7278 7279 bfqd->hw_tag = -1; 7280 bfqd->nonrot_with_queueing = blk_queue_nonrot(bfqd->queue); 7281 7282 bfqd->bfq_max_budget = bfq_default_max_budget; 7283 7284 bfqd->bfq_fifo_expire[0] = bfq_fifo_expire[0]; 7285 bfqd->bfq_fifo_expire[1] = bfq_fifo_expire[1]; 7286 bfqd->bfq_back_max = bfq_back_max; 7287 bfqd->bfq_back_penalty = bfq_back_penalty; 7288 bfqd->bfq_slice_idle = bfq_slice_idle; 7289 bfqd->bfq_timeout = bfq_timeout; 7290 7291 bfqd->bfq_large_burst_thresh = 8; 7292 bfqd->bfq_burst_interval = msecs_to_jiffies(180); 7293 7294 bfqd->low_latency = true; 7295 7296 /* 7297 * Trade-off between responsiveness and fairness. 
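 *
 * Editorial note (illustrative): with bfq_wr_coeff = 30, set just
 * below, a weight-raised queue is scheduled as if its weight were 30
 * times larger than that of an otherwise identical, non-raised queue,
 * which is what lets interactive and soft real-time I/O overtake a
 * heavy background flow with the same nominal weight.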
7298 */ 7299 bfqd->bfq_wr_coeff = 30; 7300 bfqd->bfq_wr_rt_max_time = msecs_to_jiffies(300); 7301 bfqd->bfq_wr_min_idle_time = msecs_to_jiffies(2000); 7302 bfqd->bfq_wr_min_inter_arr_async = msecs_to_jiffies(500); 7303 bfqd->bfq_wr_max_softrt_rate = 7000; /* 7304 * Approximate rate required 7305 * to playback or record a 7306 * high-definition compressed 7307 * video. 7308 */ 7309 bfqd->wr_busy_queues = 0; 7310 7311 /* 7312 * Begin by assuming, optimistically, that the device peak 7313 * rate is equal to 2/3 of the highest reference rate. 7314 */ 7315 bfqd->rate_dur_prod = ref_rate[blk_queue_nonrot(bfqd->queue)] * 7316 ref_wr_duration[blk_queue_nonrot(bfqd->queue)]; 7317 bfqd->peak_rate = ref_rate[blk_queue_nonrot(bfqd->queue)] * 2 / 3; 7318 7319 /* see comments on the definition of next field inside bfq_data */ 7320 bfqd->actuator_load_threshold = 4; 7321 7322 spin_lock_init(&bfqd->lock); 7323 7324 /* 7325 * The invocation of the next bfq_create_group_hierarchy 7326 * function is the head of a chain of function calls 7327 * (bfq_create_group_hierarchy->blkcg_activate_policy-> 7328 * blk_mq_freeze_queue) that may lead to the invocation of the 7329 * has_work hook function. For this reason, 7330 * bfq_create_group_hierarchy is invoked only after all 7331 * scheduler data has been initialized, apart from the fields 7332 * that can be initialized only after invoking 7333 * bfq_create_group_hierarchy. This, in particular, enables 7334 * has_work to correctly return false. Of course, to avoid 7335 * other inconsistencies, the blk-mq stack must then refrain 7336 * from invoking further scheduler hooks before this init 7337 * function is finished. 7338 */ 7339 bfqd->root_group = bfq_create_group_hierarchy(bfqd, q->node); 7340 if (!bfqd->root_group) 7341 goto out_free; 7342 bfq_init_root_group(bfqd->root_group, bfqd); 7343 bfq_init_entity(&bfqd->oom_bfqq.entity, bfqd->root_group); 7344 7345 /* We dispatch from request queue wide instead of hw queue */ 7346 blk_queue_flag_set(QUEUE_FLAG_SQ_SCHED, q); 7347 7348 set_bit(ELEVATOR_FLAG_DISABLE_WBT, &eq->flags); 7349 wbt_disable_default(q->disk); 7350 blk_stat_enable_accounting(q); 7351 7352 return 0; 7353 7354 out_free: 7355 kfree(bfqd); 7356 kobject_put(&eq->kobj); 7357 return -ENOMEM; 7358 } 7359 7360 static void bfq_slab_kill(void) 7361 { 7362 kmem_cache_destroy(bfq_pool); 7363 } 7364 7365 static int __init bfq_slab_setup(void) 7366 { 7367 bfq_pool = KMEM_CACHE(bfq_queue, 0); 7368 if (!bfq_pool) 7369 return -ENOMEM; 7370 return 0; 7371 } 7372 7373 static ssize_t bfq_var_show(unsigned int var, char *page) 7374 { 7375 return sprintf(page, "%u\n", var); 7376 } 7377 7378 static int bfq_var_store(unsigned long *var, const char *page) 7379 { 7380 unsigned long new_val; 7381 int ret = kstrtoul(page, 10, &new_val); 7382 7383 if (ret) 7384 return ret; 7385 *var = new_val; 7386 return 0; 7387 } 7388 7389 #define SHOW_FUNCTION(__FUNC, __VAR, __CONV) \ 7390 static ssize_t __FUNC(struct elevator_queue *e, char *page) \ 7391 { \ 7392 struct bfq_data *bfqd = e->elevator_data; \ 7393 u64 __data = __VAR; \ 7394 if (__CONV == 1) \ 7395 __data = jiffies_to_msecs(__data); \ 7396 else if (__CONV == 2) \ 7397 __data = div_u64(__data, NSEC_PER_MSEC); \ 7398 return bfq_var_show(__data, (page)); \ 7399 } 7400 SHOW_FUNCTION(bfq_fifo_expire_sync_show, bfqd->bfq_fifo_expire[1], 2); 7401 SHOW_FUNCTION(bfq_fifo_expire_async_show, bfqd->bfq_fifo_expire[0], 2); 7402 SHOW_FUNCTION(bfq_back_seek_max_show, bfqd->bfq_back_max, 0); 7403 SHOW_FUNCTION(bfq_back_seek_penalty_show, 
bfqd->bfq_back_penalty, 0); 7404 SHOW_FUNCTION(bfq_slice_idle_show, bfqd->bfq_slice_idle, 2); 7405 SHOW_FUNCTION(bfq_max_budget_show, bfqd->bfq_user_max_budget, 0); 7406 SHOW_FUNCTION(bfq_timeout_sync_show, bfqd->bfq_timeout, 1); 7407 SHOW_FUNCTION(bfq_strict_guarantees_show, bfqd->strict_guarantees, 0); 7408 SHOW_FUNCTION(bfq_low_latency_show, bfqd->low_latency, 0); 7409 #undef SHOW_FUNCTION 7410 7411 #define USEC_SHOW_FUNCTION(__FUNC, __VAR) \ 7412 static ssize_t __FUNC(struct elevator_queue *e, char *page) \ 7413 { \ 7414 struct bfq_data *bfqd = e->elevator_data; \ 7415 u64 __data = __VAR; \ 7416 __data = div_u64(__data, NSEC_PER_USEC); \ 7417 return bfq_var_show(__data, (page)); \ 7418 } 7419 USEC_SHOW_FUNCTION(bfq_slice_idle_us_show, bfqd->bfq_slice_idle); 7420 #undef USEC_SHOW_FUNCTION 7421 7422 #define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV) \ 7423 static ssize_t \ 7424 __FUNC(struct elevator_queue *e, const char *page, size_t count) \ 7425 { \ 7426 struct bfq_data *bfqd = e->elevator_data; \ 7427 unsigned long __data, __min = (MIN), __max = (MAX); \ 7428 int ret; \ 7429 \ 7430 ret = bfq_var_store(&__data, (page)); \ 7431 if (ret) \ 7432 return ret; \ 7433 if (__data < __min) \ 7434 __data = __min; \ 7435 else if (__data > __max) \ 7436 __data = __max; \ 7437 if (__CONV == 1) \ 7438 *(__PTR) = msecs_to_jiffies(__data); \ 7439 else if (__CONV == 2) \ 7440 *(__PTR) = (u64)__data * NSEC_PER_MSEC; \ 7441 else \ 7442 *(__PTR) = __data; \ 7443 return count; \ 7444 } 7445 STORE_FUNCTION(bfq_fifo_expire_sync_store, &bfqd->bfq_fifo_expire[1], 1, 7446 INT_MAX, 2); 7447 STORE_FUNCTION(bfq_fifo_expire_async_store, &bfqd->bfq_fifo_expire[0], 1, 7448 INT_MAX, 2); 7449 STORE_FUNCTION(bfq_back_seek_max_store, &bfqd->bfq_back_max, 0, INT_MAX, 0); 7450 STORE_FUNCTION(bfq_back_seek_penalty_store, &bfqd->bfq_back_penalty, 1, 7451 INT_MAX, 0); 7452 STORE_FUNCTION(bfq_slice_idle_store, &bfqd->bfq_slice_idle, 0, INT_MAX, 2); 7453 #undef STORE_FUNCTION 7454 7455 #define USEC_STORE_FUNCTION(__FUNC, __PTR, MIN, MAX) \ 7456 static ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count)\ 7457 { \ 7458 struct bfq_data *bfqd = e->elevator_data; \ 7459 unsigned long __data, __min = (MIN), __max = (MAX); \ 7460 int ret; \ 7461 \ 7462 ret = bfq_var_store(&__data, (page)); \ 7463 if (ret) \ 7464 return ret; \ 7465 if (__data < __min) \ 7466 __data = __min; \ 7467 else if (__data > __max) \ 7468 __data = __max; \ 7469 *(__PTR) = (u64)__data * NSEC_PER_USEC; \ 7470 return count; \ 7471 } 7472 USEC_STORE_FUNCTION(bfq_slice_idle_us_store, &bfqd->bfq_slice_idle, 0, 7473 UINT_MAX); 7474 #undef USEC_STORE_FUNCTION 7475 7476 static ssize_t bfq_max_budget_store(struct elevator_queue *e, 7477 const char *page, size_t count) 7478 { 7479 struct bfq_data *bfqd = e->elevator_data; 7480 unsigned long __data; 7481 int ret; 7482 7483 ret = bfq_var_store(&__data, (page)); 7484 if (ret) 7485 return ret; 7486 7487 if (__data == 0) 7488 bfqd->bfq_max_budget = bfq_calc_max_budget(bfqd); 7489 else { 7490 if (__data > INT_MAX) 7491 __data = INT_MAX; 7492 bfqd->bfq_max_budget = __data; 7493 } 7494 7495 bfqd->bfq_user_max_budget = __data; 7496 7497 return count; 7498 } 7499 7500 /* 7501 * Leaving this name to preserve name compatibility with cfq 7502 * parameters, but this timeout is used for both sync and async. 
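 *
 * Editorial note (illustrative): the value written is interpreted in
 * milliseconds and stored in jiffies; e.g., writing 250 sets
 * bfqd->bfq_timeout to msecs_to_jiffies(250) and, if max_budget has
 * been left at 0 (auto-tuning), bfq_calc_max_budget() is re-run so the
 * maximum budget scales with the new timeout.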
7503 */ 7504 static ssize_t bfq_timeout_sync_store(struct elevator_queue *e, 7505 const char *page, size_t count) 7506 { 7507 struct bfq_data *bfqd = e->elevator_data; 7508 unsigned long __data; 7509 int ret; 7510 7511 ret = bfq_var_store(&__data, (page)); 7512 if (ret) 7513 return ret; 7514 7515 if (__data < 1) 7516 __data = 1; 7517 else if (__data > INT_MAX) 7518 __data = INT_MAX; 7519 7520 bfqd->bfq_timeout = msecs_to_jiffies(__data); 7521 if (bfqd->bfq_user_max_budget == 0) 7522 bfqd->bfq_max_budget = bfq_calc_max_budget(bfqd); 7523 7524 return count; 7525 } 7526 7527 static ssize_t bfq_strict_guarantees_store(struct elevator_queue *e, 7528 const char *page, size_t count) 7529 { 7530 struct bfq_data *bfqd = e->elevator_data; 7531 unsigned long __data; 7532 int ret; 7533 7534 ret = bfq_var_store(&__data, (page)); 7535 if (ret) 7536 return ret; 7537 7538 if (__data > 1) 7539 __data = 1; 7540 if (!bfqd->strict_guarantees && __data == 1 7541 && bfqd->bfq_slice_idle < 8 * NSEC_PER_MSEC) 7542 bfqd->bfq_slice_idle = 8 * NSEC_PER_MSEC; 7543 7544 bfqd->strict_guarantees = __data; 7545 7546 return count; 7547 } 7548 7549 static ssize_t bfq_low_latency_store(struct elevator_queue *e, 7550 const char *page, size_t count) 7551 { 7552 struct bfq_data *bfqd = e->elevator_data; 7553 unsigned long __data; 7554 int ret; 7555 7556 ret = bfq_var_store(&__data, (page)); 7557 if (ret) 7558 return ret; 7559 7560 if (__data > 1) 7561 __data = 1; 7562 if (__data == 0 && bfqd->low_latency != 0) 7563 bfq_end_wr(bfqd); 7564 bfqd->low_latency = __data; 7565 7566 return count; 7567 } 7568 7569 #define BFQ_ATTR(name) \ 7570 __ATTR(name, 0644, bfq_##name##_show, bfq_##name##_store) 7571 7572 static struct elv_fs_entry bfq_attrs[] = { 7573 BFQ_ATTR(fifo_expire_sync), 7574 BFQ_ATTR(fifo_expire_async), 7575 BFQ_ATTR(back_seek_max), 7576 BFQ_ATTR(back_seek_penalty), 7577 BFQ_ATTR(slice_idle), 7578 BFQ_ATTR(slice_idle_us), 7579 BFQ_ATTR(max_budget), 7580 BFQ_ATTR(timeout_sync), 7581 BFQ_ATTR(strict_guarantees), 7582 BFQ_ATTR(low_latency), 7583 __ATTR_NULL 7584 }; 7585 7586 static struct elevator_type iosched_bfq_mq = { 7587 .ops = { 7588 .limit_depth = bfq_limit_depth, 7589 .prepare_request = bfq_prepare_request, 7590 .requeue_request = bfq_finish_requeue_request, 7591 .finish_request = bfq_finish_request, 7592 .exit_icq = bfq_exit_icq, 7593 .insert_requests = bfq_insert_requests, 7594 .dispatch_request = bfq_dispatch_request, 7595 .next_request = elv_rb_latter_request, 7596 .former_request = elv_rb_former_request, 7597 .allow_merge = bfq_allow_bio_merge, 7598 .bio_merge = bfq_bio_merge, 7599 .request_merge = bfq_request_merge, 7600 .requests_merged = bfq_requests_merged, 7601 .request_merged = bfq_request_merged, 7602 .has_work = bfq_has_work, 7603 .depth_updated = bfq_depth_updated, 7604 .init_hctx = bfq_init_hctx, 7605 .init_sched = bfq_init_queue, 7606 .exit_sched = bfq_exit_queue, 7607 }, 7608 7609 .icq_size = sizeof(struct bfq_io_cq), 7610 .icq_align = __alignof__(struct bfq_io_cq), 7611 .elevator_attrs = bfq_attrs, 7612 .elevator_name = "bfq", 7613 .elevator_owner = THIS_MODULE, 7614 }; 7615 MODULE_ALIAS("bfq-iosched"); 7616 7617 static int __init bfq_init(void) 7618 { 7619 int ret; 7620 7621 #ifdef CONFIG_BFQ_GROUP_IOSCHED 7622 ret = blkcg_policy_register(&blkcg_policy_bfq); 7623 if (ret) 7624 return ret; 7625 #endif 7626 7627 ret = -ENOMEM; 7628 if (bfq_slab_setup()) 7629 goto err_pol_unreg; 7630 7631 /* 7632 * Times to load large popular applications for the typical 7633 * systems installed on the reference 
devices (see the 7634 * comments before the definition of the next 7635 * array). Actually, we use slightly lower values, as the 7636 * estimated peak rate tends to be smaller than the actual 7637 * peak rate. The reason for this last fact is that estimates 7638 * are computed over much shorter time intervals than the long 7639 * intervals typically used for benchmarking. Why? First, to 7640 * adapt more quickly to variations. Second, because an I/O 7641 * scheduler cannot rely on a peak-rate-evaluation workload to 7642 * be run for a long time. 7643 */ 7644 ref_wr_duration[0] = msecs_to_jiffies(7000); /* actually 8 sec */ 7645 ref_wr_duration[1] = msecs_to_jiffies(2500); /* actually 3 sec */ 7646 7647 ret = elv_register(&iosched_bfq_mq); 7648 if (ret) 7649 goto slab_kill; 7650 7651 return 0; 7652 7653 slab_kill: 7654 bfq_slab_kill(); 7655 err_pol_unreg: 7656 #ifdef CONFIG_BFQ_GROUP_IOSCHED 7657 blkcg_policy_unregister(&blkcg_policy_bfq); 7658 #endif 7659 return ret; 7660 } 7661 7662 static void __exit bfq_exit(void) 7663 { 7664 elv_unregister(&iosched_bfq_mq); 7665 #ifdef CONFIG_BFQ_GROUP_IOSCHED 7666 blkcg_policy_unregister(&blkcg_policy_bfq); 7667 #endif 7668 bfq_slab_kill(); 7669 } 7670 7671 module_init(bfq_init); 7672 module_exit(bfq_exit); 7673 7674 MODULE_AUTHOR("Paolo Valente"); 7675 MODULE_LICENSE("GPL"); 7676 MODULE_DESCRIPTION("MQ Budget Fair Queueing I/O Scheduler"); 7677
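/*
 * Editorial usage note (illustrative, not part of the scheduler): once
 * this module is available, bfq is selected per device through the
 * standard sysfs "scheduler" attribute of the request queue. The
 * sketch below is a minimal userspace helper that does so; the device
 * name "sda" is only a placeholder chosen for the example.
 *
 *	#include <stdio.h>
 *
 *	int main(void)
 *	{
 *		// Hypothetical target device; adjust to the disk at hand.
 *		const char *path = "/sys/block/sda/queue/scheduler";
 *		FILE *f = fopen(path, "w");
 *
 *		if (!f) {
 *			perror("fopen");
 *			return 1;
 *		}
 *		// Writing the scheduler name selects it for this queue.
 *		fputs("bfq", f);
 *		return fclose(f) ? 1 : 0;
 *	}
 */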