/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */
/*
 * Copyright 2009 Sun Microsystems, Inc. All rights reserved.
 * Use is subject to license terms.
 */

/*
 * Page Retire - Big Theory Statement.
 *
 * This file handles removing sections of faulty memory from use when the
 * user land FMA Diagnosis Engine requests that a page be removed or when
 * a CE or UE is detected by the hardware.
 *
 * In the bad old days, the kernel side of Page Retire did a lot of the work
 * on its own. Now, with the DE keeping track of errors, the kernel side is
 * rather simple minded on most platforms.
 *
 * Errors are all reflected to the DE, and after digesting the error and
 * looking at all previously reported errors, the DE decides what should
 * be done about the current error. If the DE wants a particular page to
 * be retired, then the kernel page retire code is invoked via an ioctl.
 * On non-FMA platforms, the ue_drain and ce_drain paths end up calling
 * page retire to handle the error. Since page retire is just a simple
 * mechanism it doesn't need to differentiate between the different callers.
 *
 * The p_toxic field in the page_t is used to indicate which errors have
 * occurred and what action has been taken on a given page. Because errors are
 * reported without regard to the locked state of a page, no locks are used
 * to SET the error bits in p_toxic. However, in order to clear the error
 * bits, the page_t must be held exclusively locked.
 *
 * When page_retire() is called, it must be able to acquire locks, sleep, etc.
 * It must not be called from high-level interrupt context.
 *
 * Depending on how the requested page is being used at the time of the retire
 * request (and on the availability of sufficient system resources), the page
 * may be retired immediately, or just marked for retirement later. For
 * example, locked pages are marked, while free pages are retired. Multiple
 * requests may be made to retire the same page, although there is no need
 * to: once the p_toxic flags are set, the page will be retired as soon as it
 * can be exclusively locked.
 *
 * The retire mechanism is driven centrally out of page_unlock(). To expedite
 * the retirement of pages, further requests for SE_SHARED locks are denied
 * as long as a page retirement is pending. In addition, as long as pages are
 * pending retirement a background thread runs periodically trying to retire
 * those pages. Pages which could not be retired while the system is running
 * are scrubbed prior to rebooting to avoid latent errors on the next boot.
 *
 * UE pages without persistent errors are scrubbed and returned to service.
 * Recidivist pages, as well as FMA-directed requests for retirement, result
 * in the page being taken out of service. Once the decision is made to take
 * a page out of service, the page is cleared, hashed onto the retired_pages
 * vnode, marked as retired, and it is unlocked. No other requesters (except
 * for unretire) are allowed to lock retired pages.
 *
 * The public routines return (sadly) 0 if they worked and a non-zero error
 * value if something went wrong. This is done for the ioctl side of the
 * world to allow errors to be reflected all the way out to user land. The
 * non-zero values are explained in comments atop each function.
 */

/*
 * Things to fix:
 *
 *    1. Trying to retire non-relocatable kvp pages may result in a
 *    quagmire. This is because seg_kmem() no longer keeps its pages locked,
 *    and calls page_lookup() in the free path; since kvp pages are modified
 *    and don't have a usable backing store, page_retire() can't do anything
 *    with them, and we'll keep denying the lock to seg_kmem_free() in a
 *    vicious cycle. To prevent that, we don't deny locks to kvp pages, and
 *    hence only try to retire a page from page_unlock() in the free path.
 *    Since most kernel pages are indefinitely held anyway, and don't
 *    participate in I/O, this is of little consequence.
 *
 *    2. Low memory situations will be interesting. If we don't have
 *    enough memory for page_relocate() to succeed, we won't be able to
 *    retire dirty pages; nobody will be able to push them out to disk
 *    either, since we aggressively deny the page lock. We could change
 *    fsflush so it can recognize this situation, grab the lock, and push
 *    the page out, where we'll catch it in the free path and retire it.
 *
 *    3. Beware of places that have code like this in them:
 *
 *        if (! page_tryupgrade(pp)) {
 *            page_unlock(pp);
 *            while (! page_lock(pp, SE_EXCL, NULL, P_RECLAIM)) {
 *                / *NOTHING* /
 *            }
 *        }
 *        page_free(pp);
 *
 *    The problem is that pp can change identity right after the
 *    page_unlock() call. In particular, page_retire() can step in
 *    there, change pp's identity, and hash pp onto the retired_vnode.
 *
 *    Of course, other functions besides page_retire() can have the
 *    same effect. A kmem reader can waltz by, set up a mapping to the
 *    page, and then unlock the page. Page_free() will then go castors
 *    up. So if anybody is doing this, it's already a bug.
 *
 *    4. mdboot()'s call into page_retire_mdboot() should probably be
 *    moved lower. Where the call is made now, we can get into trouble
 *    by scrubbing a kernel page that is then accessed later.
 */

#include <sys/types.h>
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mman.h>
#include <sys/vnode.h>
#include <sys/vfs_opreg.h>
#include <sys/cmn_err.h>
#include <sys/ksynch.h>
#include <sys/thread.h>
#include <sys/disp.h>
#include <sys/ontrap.h>
#include <sys/vmsystm.h>
#include <sys/mem_config.h>
#include <sys/atomic.h>
#include <sys/callb.h>
#include <vm/page.h>
#include <vm/vm_dep.h>
#include <vm/as.h>
#include <vm/hat.h>

/*
 * vnode for all pages which are retired from the VM system.
 */
vnode_t *retired_pages;

static int page_retire_pp_finish(page_t *, void *, uint_t);

/*
 * Make a list of all of the pages that have been marked for retirement
 * but are not yet retired.
 * At system shutdown, we will scrub all of the pages in the list in case
 * there are outstanding UEs. Then, we cross-check this list against the
 * number of pages that are yet to be retired, and if we find
 * inconsistencies, we scan every page_t in the whole system looking for
 * any pages that need to be scrubbed for UEs. The background thread also
 * uses this queue to determine which pages it should keep trying to retire.
 */
#ifdef DEBUG
#define PR_PENDING_QMAX 32
#else /* DEBUG */
#define PR_PENDING_QMAX 256
#endif /* DEBUG */
page_t *pr_pending_q[PR_PENDING_QMAX];
kmutex_t pr_q_mutex;

/*
 * Page retire global kstats
 */
struct page_retire_kstat {
    kstat_named_t pr_retired;
    kstat_named_t pr_requested;
    kstat_named_t pr_requested_free;
    kstat_named_t pr_enqueue_fail;
    kstat_named_t pr_dequeue_fail;
    kstat_named_t pr_pending;
    kstat_named_t pr_pending_kas;
    kstat_named_t pr_failed;
    kstat_named_t pr_failed_kernel;
    kstat_named_t pr_limit;
    kstat_named_t pr_limit_exceeded;
    kstat_named_t pr_fma;
    kstat_named_t pr_mce;
    kstat_named_t pr_ue;
    kstat_named_t pr_ue_cleared_retire;
    kstat_named_t pr_ue_cleared_free;
    kstat_named_t pr_ue_persistent;
    kstat_named_t pr_unretired;
};

static struct page_retire_kstat page_retire_kstat = {
    { "pages_retired",              KSTAT_DATA_UINT64},
    { "pages_retire_request",       KSTAT_DATA_UINT64},
    { "pages_retire_request_free",  KSTAT_DATA_UINT64},
    { "pages_notenqueued",          KSTAT_DATA_UINT64},
    { "pages_notdequeued",          KSTAT_DATA_UINT64},
    { "pages_pending",              KSTAT_DATA_UINT64},
    { "pages_pending_kas",          KSTAT_DATA_UINT64},
    { "pages_deferred",             KSTAT_DATA_UINT64},
    { "pages_deferred_kernel",      KSTAT_DATA_UINT64},
    { "pages_limit",                KSTAT_DATA_UINT64},
    { "pages_limit_exceeded",       KSTAT_DATA_UINT64},
    { "pages_fma",                  KSTAT_DATA_UINT64},
    { "pages_multiple_ce",          KSTAT_DATA_UINT64},
    { "pages_ue",                   KSTAT_DATA_UINT64},
    { "pages_ue_cleared_retired",   KSTAT_DATA_UINT64},
    { "pages_ue_cleared_freed",     KSTAT_DATA_UINT64},
    { "pages_ue_persistent",        KSTAT_DATA_UINT64},
    { "pages_unretired",            KSTAT_DATA_UINT64},
};

static kstat_t *page_retire_ksp = NULL;

#define PR_INCR_KSTAT(stat) \
    atomic_add_64(&(page_retire_kstat.stat.value.ui64), 1)
#define PR_DECR_KSTAT(stat) \
    atomic_add_64(&(page_retire_kstat.stat.value.ui64), -1)

#define PR_KSTAT_RETIRED_CE   (page_retire_kstat.pr_mce.value.ui64)
#define PR_KSTAT_RETIRED_FMA  (page_retire_kstat.pr_fma.value.ui64)
#define PR_KSTAT_RETIRED_NOTUE (PR_KSTAT_RETIRED_CE + PR_KSTAT_RETIRED_FMA)
#define PR_KSTAT_PENDING      (page_retire_kstat.pr_pending.value.ui64)
#define PR_KSTAT_PENDING_KAS  (page_retire_kstat.pr_pending_kas.value.ui64)
#define PR_KSTAT_EQFAIL       (page_retire_kstat.pr_enqueue_fail.value.ui64)
#define PR_KSTAT_DQFAIL       (page_retire_kstat.pr_dequeue_fail.value.ui64)

/*
 * page retire kstats to list all retired pages
 */
static int pr_list_kstat_update(kstat_t *ksp, int rw);
static int pr_list_kstat_snapshot(kstat_t *ksp, void *buf, int rw);
kmutex_t pr_list_kstat_mutex;

/*
 * Limit the number of multiple CE page retires.
 * The default is 0.1% of physmem, or 1 in 1000 pages. This is set in
 * basis points, where 100 basis points equals one percent.
 */
#define MCE_BPT 10
uint64_t max_pages_retired_bps = MCE_BPT;
#define PAGE_RETIRE_LIMIT ((physmem * max_pages_retired_bps) / 10000)

/*
 * Control over the verbosity of page retirement.
 *
 * When set to zero (the default), no messages will be printed.
 * When set to one, summary messages will be printed.
 * When set > one, all messages will be printed.
 *
 * A value of one will trigger detailed messages for retirement operations,
 * and is intended as a platform tunable for processors where FMA's DE does
 * not run (e.g., spitfire). Values > one are intended for debugging only.
 */
int page_retire_messages = 0;

/*
 * Control whether or not we return scrubbed UE pages to service.
 * By default we do not since FMA wants to run its diagnostics first
 * and then ask us to unretire the page if it passes. Non-FMA platforms
 * may set this to zero so we will only retire recidivist pages. It should
 * not be changed by the user.
 */
int page_retire_first_ue = 1;

/*
 * Master enable for page retire. This prevents a CE or UE early in boot
 * from trying to retire a page before page_retire_init() has finished
 * setting things up. This is internal only and is not a tunable!
 */
static int pr_enable = 0;

extern struct vnode kvp;

#ifdef DEBUG
struct page_retire_debug {
    int prd_dup1;
    int prd_dup2;
    int prd_qdup;
    int prd_noaction;
    int prd_queued;
    int prd_notqueued;
    int prd_dequeue;
    int prd_top;
    int prd_locked;
    int prd_reloc;
    int prd_relocfail;
    int prd_mod;
    int prd_mod_late;
    int prd_kern;
    int prd_free;
    int prd_noreclaim;
    int prd_hashout;
    int prd_fma;
    int prd_uescrubbed;
    int prd_uenotscrubbed;
    int prd_mce;
    int prd_prlocked;
    int prd_prnotlocked;
    int prd_prretired;
    int prd_ulocked;
    int prd_unotretired;
    int prd_udestroy;
    int prd_uhashout;
    int prd_uunretired;
    int prd_unotlocked;
    int prd_checkhit;
    int prd_checkmiss_pend;
    int prd_checkmiss_noerr;
    int prd_tctop;
    int prd_tclocked;
    int prd_hunt;
    int prd_dohunt;
    int prd_earlyhunt;
    int prd_latehunt;
    int prd_nofreedemote;
    int prd_nodemote;
    int prd_demoted;
} pr_debug;

#define PR_DEBUG(foo) ((pr_debug.foo)++)

/*
 * A type histogram. We record the incidence of the various toxic
 * flag combinations along with the interesting page attributes. The
 * goal is to get as many combinations as we can while driving all
 * pr_debug values nonzero (indicating we've exercised all possible
 * code paths across all possible page types). Not all combinations
 * will make sense -- e.g. PRT_MOD|PRT_KERNEL.
 *
 * pr_type offset bit encoding (when examining with a debugger):
 *
 *    PRT_NAMED  - 0x4
 *    PRT_KERNEL - 0x8
 *    PRT_FREE   - 0x10
 *    PRT_MOD    - 0x20
 *    PRT_FMA    - 0x0
 *    PRT_MCE    - 0x40
 *    PRT_UE     - 0x80
 */

#define PRT_NAMED   0x01
#define PRT_KERNEL  0x02
#define PRT_FREE    0x04
#define PRT_MOD     0x08
#define PRT_FMA     0x00    /* yes, this is not a mistake */
#define PRT_MCE     0x10
#define PRT_UE      0x20
#define PRT_ALL     0x3F

int pr_types[PRT_ALL+1];

#define PR_TYPES(pp) { \
    int whichtype = 0; \
    if (pp->p_vnode) \
        whichtype |= PRT_NAMED; \
    if (PP_ISKAS(pp)) \
        whichtype |= PRT_KERNEL; \
    if (PP_ISFREE(pp)) \
        whichtype |= PRT_FREE; \
    if (hat_ismod(pp)) \
        whichtype |= PRT_MOD; \
    if (pp->p_toxic & PR_UE) \
        whichtype |= PRT_UE; \
    if (pp->p_toxic & PR_MCE) \
        whichtype |= PRT_MCE; \
    pr_types[whichtype]++; \
}

int recl_calls;
int recl_mtbf = 3;
int reloc_calls;
int reloc_mtbf = 7;
int pr_calls;
int pr_mtbf = 15;

#define MTBF(v, f) (((++(v)) & (f)) != (f))

#else /* DEBUG */

#define PR_DEBUG(foo) /* nothing */
#define PR_TYPES(foo) /* nothing */
#define MTBF(v, f) (1)

#endif /* DEBUG */

/*
 * page_retire_done() - completion processing
 *
 * Used by the page_retire code for common completion processing.
 * It keeps track of how many times a given result has happened,
 * and writes out an occasional message.
 *
 * May be called with a NULL pp (PRD_INVALID_PA case).
 */
#define PRD_INVALID_KEY     -1
#define PRD_SUCCESS         0
#define PRD_PENDING         1
#define PRD_FAILED          2
#define PRD_DUPLICATE       3
#define PRD_INVALID_PA      4
#define PRD_LIMIT           5
#define PRD_UE_SCRUBBED     6
#define PRD_UNR_SUCCESS     7
#define PRD_UNR_CANTLOCK    8
#define PRD_UNR_NOT         9

typedef struct page_retire_op {
    int  pr_key;      /* one of the PRD_* defines from above */
    int  pr_count;    /* How many times this has happened */
    int  pr_retval;   /* return value */
    int  pr_msglvl;   /* message level - when to print */
    char *pr_message; /* Cryptic message for field service */
} page_retire_op_t;

static page_retire_op_t page_retire_ops[] = {
    /* key              count  retval  msglvl  message */
    {PRD_SUCCESS,       0,     0,      1,
        "Page 0x%08x.%08x removed from service"},
    {PRD_PENDING,       0,     EAGAIN, 2,
        "Page 0x%08x.%08x will be retired on free"},
    {PRD_FAILED,        0,     EAGAIN, 0, NULL},
    {PRD_DUPLICATE,     0,     EIO,    2,
        "Page 0x%08x.%08x already retired or pending"},
    {PRD_INVALID_PA,    0,     EINVAL, 2,
        "PA 0x%08x.%08x is not a relocatable page"},
    {PRD_LIMIT,         0,     0,      1,
        "Page 0x%08x.%08x not retired due to limit exceeded"},
    {PRD_UE_SCRUBBED,   0,     0,      1,
        "Previously reported error on page 0x%08x.%08x cleared"},
    {PRD_UNR_SUCCESS,   0,     0,      1,
        "Page 0x%08x.%08x returned to service"},
    {PRD_UNR_CANTLOCK,  0,     EAGAIN, 2,
        "Page 0x%08x.%08x could not be unretired"},
    {PRD_UNR_NOT,       0,     EIO,    2,
        "Page 0x%08x.%08x is not retired"},
    {PRD_INVALID_KEY,   0,     0,      0, NULL} /* MUST BE LAST! */
};

/*
 * Print a message if page_retire_messages is true.
 */
#define PR_MESSAGE(debuglvl, msglvl, msg, pa) \
{ \
    uint64_t p = (uint64_t)pa; \
    if (page_retire_messages >= msglvl && msg != NULL) { \
        cmn_err(debuglvl, msg, \
            (uint32_t)(p >> 32), (uint32_t)p); \
    } \
}

/*
 * Note that multiple bits may be set in a single settoxic operation.
 * May be called without the page locked.
 */
void
page_settoxic(page_t *pp, uchar_t bits)
{
    atomic_or_8(&pp->p_toxic, bits);
}

/*
 * Note that multiple bits may be cleared in a single clrtoxic operation.
 * Must be called with the page exclusively locked to prevent races which
 * may attempt to retire a page without any toxic bits set.
 * Note that the PR_CAPTURE bit can be cleared without the exclusive lock
 * being held as there is a separate mutex which protects that bit.
 */
void
page_clrtoxic(page_t *pp, uchar_t bits)
{
    ASSERT((bits & PR_CAPTURE) || PAGE_EXCL(pp));
    atomic_and_8(&pp->p_toxic, ~bits);
}

/*
 * Prints any page retire messages to the user, and decides what
 * error code is appropriate for the condition reported.
 */
static int
page_retire_done(page_t *pp, int code)
{
    page_retire_op_t *prop;
    uint64_t pa = 0;
    int i;

    if (pp != NULL) {
        pa = mmu_ptob((uint64_t)pp->p_pagenum);
    }

    prop = NULL;
    for (i = 0; page_retire_ops[i].pr_key != PRD_INVALID_KEY; i++) {
        if (page_retire_ops[i].pr_key == code) {
            prop = &page_retire_ops[i];
            break;
        }
    }

#ifdef DEBUG
    if (page_retire_ops[i].pr_key == PRD_INVALID_KEY) {
        cmn_err(CE_PANIC, "page_retire_done: Invalid opcode %d", code);
    }
#endif

    ASSERT(prop->pr_key == code);

    prop->pr_count++;

    PR_MESSAGE(CE_NOTE, prop->pr_msglvl, prop->pr_message, pa);
    if (pp != NULL) {
        page_settoxic(pp, PR_MSG);
    }

    return (prop->pr_retval);
}

/*
 * Act like page_destroy(), but instead of freeing the page, hash it onto
 * the retired_pages vnode, and mark it retired.
 *
 * For fun, we try to scrub the page until it's squeaky clean.
 * availrmem is adjusted here.
 */
static void
page_retire_destroy(page_t *pp)
{
    u_offset_t off = (u_offset_t)((uintptr_t)pp);

    ASSERT(PAGE_EXCL(pp));
    ASSERT(!PP_ISFREE(pp));
    ASSERT(pp->p_szc == 0);
    ASSERT(!hat_page_is_mapped(pp));
    ASSERT(!pp->p_vnode);

    page_clr_all_props(pp, 0);
    pagescrub(pp, 0, MMU_PAGESIZE);

    pp->p_next = NULL;
    pp->p_prev = NULL;
    if (page_hashin(pp, retired_pages, off, NULL) == 0) {
        cmn_err(CE_PANIC, "retired page %p hashin failed", (void *)pp);
    }

    page_settoxic(pp, PR_RETIRED);
    PR_INCR_KSTAT(pr_retired);

    if (pp->p_toxic & PR_FMA) {
        PR_INCR_KSTAT(pr_fma);
    } else if (pp->p_toxic & PR_UE) {
        PR_INCR_KSTAT(pr_ue);
    } else {
        PR_INCR_KSTAT(pr_mce);
    }

    mutex_enter(&freemem_lock);
    availrmem--;
    mutex_exit(&freemem_lock);

    page_unlock(pp);
}

/*
 * Check whether the number of pages which have been retired already exceeds
 * the maximum allowable percentage of memory which may be retired.
 *
 * Returns 1 if the limit has been exceeded.
 */
static int
page_retire_limit(void)
{
    if (PR_KSTAT_RETIRED_NOTUE >= (uint64_t)PAGE_RETIRE_LIMIT) {
        PR_INCR_KSTAT(pr_limit_exceeded);
        return (1);
    }

    return (0);
}

#define MSG_DM "Data Mismatch occurred at PA 0x%08x.%08x " \
    "[ 0x%x != 0x%x ] while attempting to clear previously " \
    "reported error; page removed from service"

#define MSG_UE "Uncorrectable Error occurred at PA 0x%08x.%08x while " \
    "attempting to clear previously reported error; page removed " \
    "from service"

/*
 * Attempt to clear a UE from a page.
 * Returns 1 if the error has been successfully cleared.
 */
static int
page_clear_transient_ue(page_t *pp)
{
    caddr_t kaddr;
    uint8_t rb, wb;
    uint64_t pa;
    uint32_t pa_hi, pa_lo;
    on_trap_data_t otd;
    int errors = 0;
    int i;

    ASSERT(PAGE_EXCL(pp));
    ASSERT(PP_PR_REQ(pp));
    ASSERT(pp->p_szc == 0);
    ASSERT(!hat_page_is_mapped(pp));

    /*
     * Clear the page and attempt to clear the UE. If we trap
     * on the next access to the page, we know the UE has recurred.
     */
    pagescrub(pp, 0, PAGESIZE);

    /*
     * Map the page and write a bunch of bit patterns to compare
     * what we wrote with what we read back. This isn't a perfect
     * test but it should be good enough to catch most of the
     * recurring UEs. If this fails to catch a recurrent UE, we'll
     * retire the page the next time we see a UE on the page.
     */
    kaddr = ppmapin(pp, PROT_READ|PROT_WRITE, (caddr_t)-1);

    pa = ptob((uint64_t)page_pptonum(pp));
    pa_hi = (uint32_t)(pa >> 32);
    pa_lo = (uint32_t)pa;

    /*
     * Disable preemption to prevent the off chance that
     * we migrate while in the middle of running through
     * the bit pattern and run on a different processor
     * than what we started on.
     */
    kpreempt_disable();

    /*
     * Fill the page with each (0x00 - 0xFF] bit pattern, flushing
     * the cache in between reading and writing. We do this under
     * on_trap() protection to avoid recursion.
     */
    if (on_trap(&otd, OT_DATA_EC)) {
        PR_MESSAGE(CE_WARN, 1, MSG_UE, pa);
        errors = 1;
    } else {
        for (wb = 0xff; wb > 0; wb--) {
            for (i = 0; i < PAGESIZE; i++) {
                kaddr[i] = wb;
            }

            sync_data_memory(kaddr, PAGESIZE);

            for (i = 0; i < PAGESIZE; i++) {
                rb = kaddr[i];
                if (rb != wb) {
                    /*
                     * We had a mismatch without a trap.
                     * Uh-oh. Something is really wrong
                     * with this system.
                     */
                    if (page_retire_messages) {
                        cmn_err(CE_WARN, MSG_DM,
                            pa_hi, pa_lo, rb, wb);
                    }
                    errors = 1;
                    goto out;    /* double break */
                }
            }
        }
    }
out:
    no_trap();
    kpreempt_enable();
    ppmapout(kaddr);

    return (errors ? 0 : 1);
}

/*
 * Try to clear a page_t with a single UE. If the UE was transient, it is
 * returned to service, and we return 1. Otherwise we return 0 meaning
 * that further processing is required to retire the page.
 */
static int
page_retire_transient_ue(page_t *pp)
{
    ASSERT(PAGE_EXCL(pp));
    ASSERT(!hat_page_is_mapped(pp));

    /*
     * If this page is a repeat offender, retire him under the
     * "two strikes and you're out" rule. The caller is responsible
     * for scrubbing the page to try to clear the error.
     */
    if (pp->p_toxic & PR_UE_SCRUBBED) {
        PR_INCR_KSTAT(pr_ue_persistent);
        return (0);
    }

    if (page_clear_transient_ue(pp)) {
        /*
         * We set the PR_UE_SCRUBBED bit; if we ever see this
         * page again, we will retire it, no questions asked.
         */
        page_settoxic(pp, PR_UE_SCRUBBED);

        if (page_retire_first_ue) {
            PR_INCR_KSTAT(pr_ue_cleared_retire);
            return (0);
        } else {
            PR_INCR_KSTAT(pr_ue_cleared_free);

            page_clrtoxic(pp, PR_UE | PR_MCE | PR_MSG);

            /* LINTED: CONSTCOND */
            VN_DISPOSE(pp, B_FREE, 1, kcred);
            return (1);
        }
    }

    PR_INCR_KSTAT(pr_ue_persistent);
    return (0);
}

/*
 * Update the statistics dynamically when our kstat is read.
 */
static int
page_retire_kstat_update(kstat_t *ksp, int rw)
{
    struct page_retire_kstat *pr;

    if (ksp == NULL)
        return (EINVAL);

    switch (rw) {

    case KSTAT_READ:
        pr = (struct page_retire_kstat *)ksp->ks_data;
        ASSERT(pr == &page_retire_kstat);
        pr->pr_limit.value.ui64 = PAGE_RETIRE_LIMIT;
        return (0);

    case KSTAT_WRITE:
        return (EACCES);

    default:
        return (EINVAL);
    }
    /*NOTREACHED*/
}

static int
pr_list_kstat_update(kstat_t *ksp, int rw)
{
    uint_t count;
    page_t *pp;
    kmutex_t *vphm;

    if (rw == KSTAT_WRITE)
        return (EACCES);

    vphm = page_vnode_mutex(retired_pages);
    mutex_enter(vphm);
    /* Needs to be under a lock so that for loop will work right */
    if (retired_pages->v_pages == NULL) {
        mutex_exit(vphm);
        ksp->ks_ndata = 0;
        ksp->ks_data_size = 0;
        return (0);
    }

    count = 1;
    for (pp = retired_pages->v_pages->p_vpnext;
        pp != retired_pages->v_pages; pp = pp->p_vpnext) {
        count++;
    }
    mutex_exit(vphm);

    ksp->ks_ndata = count;
    ksp->ks_data_size = count * 2 * sizeof (uint64_t);

    return (0);
}

/*
 * All spans will be pagesize and no coalescing will be done with the
 * list produced.
 */
static int
pr_list_kstat_snapshot(kstat_t *ksp, void *buf, int rw)
{
    kmutex_t *vphm;
    page_t *pp;
    struct memunit {
        uint64_t address;
        uint64_t size;
    } *kspmem;

    if (rw == KSTAT_WRITE)
        return (EACCES);

    ksp->ks_snaptime = gethrtime();

    kspmem = (struct memunit *)buf;

    vphm = page_vnode_mutex(retired_pages);
    mutex_enter(vphm);
    pp = retired_pages->v_pages;
    if (((caddr_t)kspmem >= (caddr_t)buf + ksp->ks_data_size) ||
        (pp == NULL)) {
        mutex_exit(vphm);
        return (0);
    }
    kspmem->address = ptob(pp->p_pagenum);
    kspmem->size = PAGESIZE;
    kspmem++;
    for (pp = pp->p_vpnext; pp != retired_pages->v_pages;
        pp = pp->p_vpnext, kspmem++) {
        if ((caddr_t)kspmem >= (caddr_t)buf + ksp->ks_data_size)
            break;
        kspmem->address = ptob(pp->p_pagenum);
        kspmem->size = PAGESIZE;
    }
    mutex_exit(vphm);

    return (0);
}

/*
 * page_retire_pend_count -- helper function for page_capture_thread,
 * returns the number of pages pending retirement.
 */
uint64_t
page_retire_pend_count(void)
{
    return (PR_KSTAT_PENDING);
}

uint64_t
page_retire_pend_kas_count(void)
{
    return (PR_KSTAT_PENDING_KAS);
}

/*
 * Bump or drop the pending-retirement counts. Pages belonging to the
 * kernel (kvp or zvp) are additionally tracked in the kas count.
 */
void
page_retire_incr_pend_count(void *datap)
{
    PR_INCR_KSTAT(pr_pending);

    if ((datap == &kvp) || (datap == &zvp)) {
        PR_INCR_KSTAT(pr_pending_kas);
    }
}

void
page_retire_decr_pend_count(void *datap)
{
    PR_DECR_KSTAT(pr_pending);

    if ((datap == &kvp) || (datap == &zvp)) {
        PR_DECR_KSTAT(pr_pending_kas);
    }
}

/*
 * Initialize the page retire mechanism:
 *
 *    - Establish the correctable error retire limit.
 *    - Initialize locks.
 *    - Build the retired_pages vnode.
 *    - Set up the kstats.
 *    - Fire off the background thread.
 *    - Tell page_retire() it's OK to start retiring pages.
 */
void
page_retire_init(void)
{
    const fs_operation_def_t retired_vnodeops_template[] = {
        { NULL, NULL }
    };
    struct vnodeops *vops;
    kstat_t *ksp;

    const uint_t page_retire_ndata =
        sizeof (page_retire_kstat) / sizeof (kstat_named_t);

    ASSERT(page_retire_ksp == NULL);

    if (max_pages_retired_bps <= 0) {
        max_pages_retired_bps = MCE_BPT;
    }

    mutex_init(&pr_q_mutex, NULL, MUTEX_DEFAULT, NULL);

    retired_pages = vn_alloc(KM_SLEEP);
    if (vn_make_ops("retired_pages", retired_vnodeops_template, &vops)) {
        cmn_err(CE_PANIC,
            "page_retire_init: can't make retired vnodeops");
    }
    vn_setops(retired_pages, vops);

    if ((page_retire_ksp = kstat_create("unix", 0, "page_retire",
        "misc", KSTAT_TYPE_NAMED, page_retire_ndata,
        KSTAT_FLAG_VIRTUAL)) == NULL) {
        cmn_err(CE_WARN, "kstat_create for page_retire failed");
    } else {
        page_retire_ksp->ks_data = (void *)&page_retire_kstat;
        page_retire_ksp->ks_update = page_retire_kstat_update;
        kstat_install(page_retire_ksp);
    }

    mutex_init(&pr_list_kstat_mutex, NULL, MUTEX_DEFAULT, NULL);
    ksp = kstat_create("unix", 0, "page_retire_list", "misc",
        KSTAT_TYPE_RAW, 0, KSTAT_FLAG_VAR_SIZE | KSTAT_FLAG_VIRTUAL);
    if (ksp != NULL) {
        ksp->ks_update = pr_list_kstat_update;
        ksp->ks_snapshot = pr_list_kstat_snapshot;
        ksp->ks_lock = &pr_list_kstat_mutex;
        kstat_install(ksp);
    }

    page_capture_register_callback(PC_RETIRE, -1, page_retire_pp_finish);
    pr_enable = 1;
}

/*
 * page_retire_hunt() callback for the retire thread.
 */
static void
page_retire_thread_cb(page_t *pp)
{
    PR_DEBUG(prd_tctop);
    if (!PP_ISKAS(pp) && page_trylock(pp, SE_EXCL)) {
        PR_DEBUG(prd_tclocked);
        page_unlock(pp);
    }
}

/*
 * Callback used by page_trycapture() to finish off retiring a page.
 * The page has already been cleaned and we've been given sole access to
 * it.
 * Always returns 0 to indicate that the callback succeeded, as the callback
 * never fails to finish retiring the given page.
 */
/*ARGSUSED*/
static int
page_retire_pp_finish(page_t *pp, void *notused, uint_t flags)
{
    int toxic;

    ASSERT(PAGE_EXCL(pp));
    ASSERT(pp->p_iolock_state == 0);
    ASSERT(pp->p_szc == 0);

    toxic = pp->p_toxic;

    /*
     * The problem page is locked, demoted, unmapped, not free,
     * hashed out, and not COW or mlocked (whew!).
     *
     * Now we select our ammunition, take it around back, and shoot it.
     */
    if (toxic & PR_UE) {
ue_error:
        if (page_retire_transient_ue(pp)) {
            PR_DEBUG(prd_uescrubbed);
            (void) page_retire_done(pp, PRD_UE_SCRUBBED);
        } else {
            PR_DEBUG(prd_uenotscrubbed);
            page_retire_destroy(pp);
            (void) page_retire_done(pp, PRD_SUCCESS);
        }
        return (0);
    } else if (toxic & PR_FMA) {
        PR_DEBUG(prd_fma);
        page_retire_destroy(pp);
        (void) page_retire_done(pp, PRD_SUCCESS);
        return (0);
    } else if (toxic & PR_MCE) {
        PR_DEBUG(prd_mce);
        page_retire_destroy(pp);
        (void) page_retire_done(pp, PRD_SUCCESS);
        return (0);
    }

    /*
     * When page_retire_first_ue is set to zero and a UE occurs which is
     * transient, it's possible that we clear some flags set by a second
     * UE error on the page which occurs while the first is currently being
     * handled and thus we need to handle the case where none of the above
     * are set. In this instance, PR_UE_SCRUBBED should be set and thus
     * we should execute the UE code above.
     */
    if (toxic & PR_UE_SCRUBBED) {
        goto ue_error;
    }

    /*
     * It's impossible to get here.
     */
    panic("bad toxic flags 0x%x in page_retire_pp_finish\n", toxic);
    return (0);
}

/*
 * page_retire() - the front door in to retire a page.
 *
 * Ideally, page_retire() would instantly retire the requested page.
 * Unfortunately, some pages are locked or otherwise tied up and cannot be
 * retired right away. We use the page capture logic to deal with this
 * situation as it will continuously try to retire the page in the background
 * if the first attempt fails. Success is determined by looking to see whether
 * the page has been retired after the page_trycapture() attempt.
 *
 * Returns:
 *
 *    - 0 on success,
 *    - EINVAL when the PA is whacko,
 *    - EIO if the page is already retired or already pending retirement, or
 *    - EAGAIN if the page could not be _immediately_ retired but is pending.
 */
int
page_retire(uint64_t pa, uchar_t reason)
{
    page_t *pp;

    ASSERT(reason & PR_REASONS);        /* there must be a reason */
    ASSERT(!(reason & ~PR_REASONS));    /* but no other bits */

    pp = page_numtopp_nolock(mmu_btop(pa));
    if (pp == NULL) {
        PR_MESSAGE(CE_WARN, 1, "Cannot schedule clearing of error on"
            " page 0x%08x.%08x; page is not relocatable memory", pa);
        return (page_retire_done(pp, PRD_INVALID_PA));
    }
    if (PP_RETIRED(pp)) {
        PR_DEBUG(prd_dup1);
        return (page_retire_done(pp, PRD_DUPLICATE));
    }

    if ((reason & PR_UE) && !PP_TOXIC(pp)) {
        PR_MESSAGE(CE_NOTE, 1, "Scheduling clearing of error on"
            " page 0x%08x.%08x", pa);
    } else if (PP_PR_REQ(pp)) {
        PR_DEBUG(prd_dup2);
        return (page_retire_done(pp, PRD_DUPLICATE));
    } else {
        PR_MESSAGE(CE_NOTE, 1, "Scheduling removal of"
            " page 0x%08x.%08x", pa);
    }

    /* Avoid setting toxic bits in the first place */
    if ((reason & (PR_FMA | PR_MCE)) && !(reason & PR_UE) &&
        page_retire_limit()) {
        return (page_retire_done(pp, PRD_LIMIT));
    }

    if (MTBF(pr_calls, pr_mtbf)) {
        page_settoxic(pp, reason);
        if (page_trycapture(pp, 0, CAPTURE_RETIRE, pp->p_vnode) == 0) {
            PR_DEBUG(prd_prlocked);
        } else {
            PR_DEBUG(prd_prnotlocked);
        }
    } else {
        PR_DEBUG(prd_prnotlocked);
    }

    if (PP_RETIRED(pp)) {
        PR_DEBUG(prd_prretired);
        return (0);
    } else {
        cv_signal(&pc_cv);
        PR_INCR_KSTAT(pr_failed);

        if (pp->p_toxic & PR_MSG) {
            return (page_retire_done(pp, PRD_FAILED));
        } else {
            return (page_retire_done(pp, PRD_PENDING));
        }
    }
}

/*
 * Take a retired page off the retired-pages vnode and clear the toxic flags.
 * Depending on "flags", the page is either freed back to the freelist, or
 * simply unretired and handed back to the caller, as described below.
 *
 * Any unretire messages are printed from this routine.
 *
 * Returns 0 if page pp was unretired; else an error code.
 *
 * If flags is:
 *    PR_UNR_FREE - lock the page, clear the toxic flags and free it
 *        to the freelist.
 *    PR_UNR_TEMP - lock the page, unretire it, leave the toxic
 *        bits set as is and return it to the caller.
 *    PR_UNR_CLEAN - page is SE_EXCL locked, unretire it, clear the
 *        toxic flags and return it to caller as is.
 */
int
page_unretire_pp(page_t *pp, int flags)
{
    /*
     * To be retired, a page has to be hashed onto the retired_pages vnode
     * and have PR_RETIRED set in p_toxic.
     */
    if (flags == PR_UNR_CLEAN ||
        page_try_reclaim_lock(pp, SE_EXCL, SE_RETIRED)) {
        ASSERT(PAGE_EXCL(pp));
        PR_DEBUG(prd_ulocked);
        if (!PP_RETIRED(pp)) {
            PR_DEBUG(prd_unotretired);
            page_unlock(pp);
            return (page_retire_done(pp, PRD_UNR_NOT));
        }

        PR_MESSAGE(CE_NOTE, 1, "unretiring retired"
            " page 0x%08x.%08x", mmu_ptob((uint64_t)pp->p_pagenum));
        if (pp->p_toxic & PR_FMA) {
            PR_DECR_KSTAT(pr_fma);
        } else if (pp->p_toxic & PR_UE) {
            PR_DECR_KSTAT(pr_ue);
        } else {
            PR_DECR_KSTAT(pr_mce);
        }

        if (flags == PR_UNR_TEMP)
            page_clrtoxic(pp, PR_RETIRED);
        else
            page_clrtoxic(pp, PR_TOXICFLAGS);

        if (flags == PR_UNR_FREE) {
            PR_DEBUG(prd_udestroy);
            page_destroy(pp, 0);
        } else {
            PR_DEBUG(prd_uhashout);
            page_hashout(pp, NULL);
        }

        mutex_enter(&freemem_lock);
        availrmem++;
        mutex_exit(&freemem_lock);

        PR_DEBUG(prd_uunretired);
        PR_DECR_KSTAT(pr_retired);
        PR_INCR_KSTAT(pr_unretired);
        return (page_retire_done(pp, PRD_UNR_SUCCESS));
    }
    PR_DEBUG(prd_unotlocked);
    return (page_retire_done(pp, PRD_UNR_CANTLOCK));
}

/*
 * Return a page to service by moving it from the retired_pages vnode
 * onto the freelist.
 *
 * Called from mmioctl_page_retire() on behalf of the FMA DE.
 *
 * Returns:
 *
 *    - 0 if the page is unretired,
 *    - EAGAIN if the pp cannot be locked,
 *    - EINVAL if the PA is whacko, and
 *    - EIO if the pp is not retired.
 */
int
page_unretire(uint64_t pa)
{
    page_t *pp;

    pp = page_numtopp_nolock(mmu_btop(pa));
    if (pp == NULL) {
        return (page_retire_done(pp, PRD_INVALID_PA));
    }

    return (page_unretire_pp(pp, PR_UNR_FREE));
}

/*
 * Test a page to see if it is retired. If errors is non-NULL, the toxic
 * bits of the page are returned. Returns 0 on success, error code on failure.
 */
int
page_retire_check_pp(page_t *pp, uint64_t *errors)
{
    int rc;

    if (PP_RETIRED(pp)) {
        PR_DEBUG(prd_checkhit);
        rc = 0;
    } else if (PP_PR_REQ(pp)) {
        PR_DEBUG(prd_checkmiss_pend);
        rc = EAGAIN;
    } else {
        PR_DEBUG(prd_checkmiss_noerr);
        rc = EIO;
    }

    /*
     * We have magically arranged the bit values returned to fmd(1M)
     * to line up with the FMA, MCE, and UE bits of the page_t.
     */
    if (errors) {
        uint64_t toxic = (uint64_t)(pp->p_toxic & PR_ERRMASK);
        if (toxic & PR_UE_SCRUBBED) {
            toxic &= ~PR_UE_SCRUBBED;
            toxic |= PR_UE;
        }
        *errors = toxic;
    }

    return (rc);
}

/*
 * Test to see if the page_t for a given PA is retired, and return the
 * hardware errors we have seen on the page if requested.
 *
 * Called from mmioctl_page_retire() on behalf of the FMA DE.
 *
 * Returns:
 *
 *    - 0 if the page is retired,
 *    - EIO if the page is not retired and has no errors,
 *    - EAGAIN if the page is not retired but is pending; and
 *    - EINVAL if the PA is whacko.
 */
int
page_retire_check(uint64_t pa, uint64_t *errors)
{
    page_t *pp;

    if (errors) {
        *errors = 0;
    }

    pp = page_numtopp_nolock(mmu_btop(pa));
    if (pp == NULL) {
        return (page_retire_done(pp, PRD_INVALID_PA));
    }

    return (page_retire_check_pp(pp, errors));
}

/*
 * Page retire self-test. For now, it always returns 0.
 */
int
page_retire_test(void)
{
    page_t *first, *pp, *cpp, *cpp2, *lpp;

    /*
     * Tests the corner case where a large page can't be retired
     * because one of the constituent pages is locked. We mark
     * one page to be retired and try to retire it, and mark the
     * other page to be retired but don't try to retire it, so
     * that page_unlock() in the failure path will recurse and try
     * to retire THAT page. This is the worst possible situation
     * we can get ourselves into.
     */
    memsegs_lock(0);
    pp = first = page_first();
    do {
        if (pp->p_szc && PP_PAGEROOT(pp) == pp) {
            cpp = pp + 1;
            lpp = PP_ISFREE(pp) ? pp : pp + 2;
            cpp2 = pp + 3;
            if (!page_trylock(lpp, pp == lpp ? SE_EXCL : SE_SHARED))
                continue;
            if (!page_trylock(cpp, SE_EXCL)) {
                page_unlock(lpp);
                continue;
            }

            /* fails */
            (void) page_retire(ptob(cpp->p_pagenum), PR_FMA);

            page_unlock(lpp);
            page_unlock(cpp);
            (void) page_retire(ptob(cpp->p_pagenum), PR_FMA);
            (void) page_retire(ptob(cpp2->p_pagenum), PR_FMA);
        }
    } while ((pp = page_next(pp)) != first);
    memsegs_unlock(0);

    return (0);
}