/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License, Version 1.0 only
 * (the "License").  You may not use this file except in compliance
 * with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */
/*
 * Copyright 2006 Sun Microsystems, Inc.  All rights reserved.
 * Use is subject to license terms.
 */

#pragma ident	"%Z%%M%	%I%	%E% SMI"

/*
 * Page Retire - Big Theory Statement.
 *
 * This file handles removing sections of faulty memory from use when the
 * user land FMA Diagnosis Engine requests that a page be removed or when
 * a CE or UE is detected by the hardware.
 *
 * In the bad old days, the kernel side of Page Retire did a lot of the work
 * on its own. Now, with the DE keeping track of errors, the kernel side is
 * rather simple minded on most platforms.
 *
 * Errors are all reflected to the DE, and after digesting the error and
 * looking at all previously reported errors, the DE decides what should
 * be done about the current error. If the DE wants a particular page to
 * be retired, then the kernel page retire code is invoked via an ioctl.
 * On non-FMA platforms, the ue_drain and ce_drain paths end up calling
 * page retire to handle the error. Since page retire is just a simple
 * mechanism it doesn't need to differentiate between the different callers.
 *
 * The p_toxic field in the page_t is used to indicate which errors have
 * occurred and what action has been taken on a given page. Because errors are
 * reported without regard to the locked state of a page, no locks are used
 * to SET the error bits in p_toxic. However, in order to clear the error
 * bits, the page_t must be held exclusively locked.
 *
 * When page_retire() is called, it must be able to acquire locks, sleep, etc.
 * It must not be called from high-level interrupt context.
 *
 * Depending on how the requested page is being used at the time of the retire
 * request (and on the availability of sufficient system resources), the page
 * may be retired immediately, or just marked for retirement later. For
 * example, locked pages are marked, while free pages are retired. Multiple
 * requests may be made to retire the same page, although there is no need
 * to: once the p_toxic flags are set, the page will be retired as soon as it
 * can be exclusively locked.
 *
 * The retire mechanism is driven centrally out of page_unlock(). To expedite
 * the retirement of pages, further requests for SE_SHARED locks are denied
 * as long as a page retirement is pending. In addition, as long as pages are
 * pending retirement a background thread runs periodically trying to retire
 * those pages. Pages which could not be retired while the system is running
 * are scrubbed prior to rebooting to avoid latent errors on the next boot.
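 *
 * As a rough sketch of the FMA-directed flow described above (illustrative
 * only; the ioctl plumbing itself lives outside this file):
 *
 *	mmioctl_page_retire()			DE requests a retire
 *	    page_retire(pa, PR_FMA)		set p_toxic, enqueue page
 *		page_trylock(pp, SE_EXCL)	try for an immediate retire
 *		page_unlock(pp)			sees p_toxic, retires page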
 *
 * UE pages without persistent errors are scrubbed and returned to service.
 * Recidivist pages, as well as FMA-directed requests for retirement, result
 * in the page being taken out of service. Once the decision is made to take
 * a page out of service, the page is cleared, hashed onto the retired_pages
 * vnode, marked as retired, and it is unlocked.  No other requesters (except
 * for unretire) are allowed to lock retired pages.
 *
 * The public routines return (sadly) 0 if they worked and a non-zero error
 * value if something went wrong. This is done for the ioctl side of the
 * world to allow errors to be reflected all the way out to user land. The
 * non-zero values are explained in comments atop each function.
 */

/*
 * Things to fix:
 *
 *	1. Cleanup SE_EWANTED.  Since we're aggressive about trying to retire
 *	pages, we can use page_retire_pp() to replace SE_EWANTED and all
 *	the special delete_memory_thread() code just goes away.
 *
 *	2. Trying to retire non-relocatable kvp pages may result in a
 *	quagmire. This is because seg_kmem() no longer keeps its pages locked,
 *	and calls page_lookup() in the free path; since kvp pages are modified
 *	and don't have a usable backing store, page_retire() can't do anything
 *	with them, and we'll keep denying the lock to seg_kmem_free() in a
 *	vicious cycle. To prevent that, we don't deny locks to kvp pages, and
 *	hence only call page_retire_pp() from page_unlock() in the free path.
 *	Since most kernel pages are indefinitely held anyway, and don't
 *	participate in I/O, this is of little consequence.
 *
 *	3. Low memory situations will be interesting. If we don't have
 *	enough memory for page_relocate() to succeed, we won't be able to
 *	retire dirty pages; nobody will be able to push them out to disk
 *	either, since we aggressively deny the page lock. We could change
 *	fsflush so it can recognize this situation, grab the lock, and push
 *	the page out, where we'll catch it in the free path and retire it.
 *
 *	4. Beware of places that have code like this in them:
 *
 *		if (! page_tryupgrade(pp)) {
 *			page_unlock(pp);
 *			while (! page_lock(pp, SE_EXCL, NULL, P_RECLAIM)) {
 *				/ *NOTHING* /
 *			}
 *		}
 *		page_free(pp);
 *
 *	The problem is that pp can change identity right after the
 *	page_unlock() call.  In particular, page_retire() can step in
 *	there, change pp's identity, and hash pp onto the retired_vnode.
 *
 *	Of course, other functions besides page_retire() can have the
 *	same effect. A kmem reader can waltz by, set up a mapping to the
 *	page, and then unlock the page. Page_free() will then go castors
 *	up. So if anybody is doing this, it's already a bug.
 *
 *	5. mdboot()'s call into page_retire_hunt() should probably be
 *	moved lower. Where the call is made now, we can get into trouble
 *	by scrubbing a kernel page that is then accessed later.
 */
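/*
 * Re: item 4 above, a sketch (ours, not code quoted from elsewhere in the
 * tree) of a safer idiom: remember the page's name, and re-verify its
 * identity after reacquiring the lock.
 *
 *	if (!page_tryupgrade(pp)) {
 *		vnode_t *vp = pp->p_vnode;
 *		u_offset_t off = pp->p_offset;
 *		page_unlock(pp);
 *		pp = page_lookup(vp, off, SE_EXCL);
 *		if (pp == NULL)
 *			return;
 *	}
 *	page_free(pp, 0);
 *
 * If the page changed identity after the unlock, page_lookup() of the old
 * name either fails or returns the new, correct page.
 */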
#include <sys/types.h>
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mman.h>
#include <sys/vnode.h>
#include <sys/cmn_err.h>
#include <sys/ksynch.h>
#include <sys/thread.h>
#include <sys/disp.h>
#include <sys/ontrap.h>
#include <sys/vmsystm.h>
#include <sys/mem_config.h>
#include <sys/atomic.h>
#include <sys/callb.h>
#include <vm/page.h>
#include <vm/vm_dep.h>
#include <vm/as.h>
#include <vm/hat.h>

/*
 * vnode for all pages which are retired from the VM system.
 */
vnode_t *retired_pages;

/*
 * Background thread that wakes up periodically to try to retire pending
 * pages. This prevents threads from becoming blocked indefinitely in
 * page_lookup() or some other routine should the page(s) they are waiting
 * on become eligible for social security.
 */
static void page_retire_thread(void);
static kthread_t *pr_thread_id;
static kcondvar_t pr_cv;
static kmutex_t pr_thread_mutex;
static clock_t pr_thread_shortwait;
static clock_t pr_thread_longwait;

/*
 * Make a list of all of the pages that have been marked for retirement
 * but are not yet retired.  At system shutdown, we will scrub all of the
 * pages in the list in case there are outstanding UEs.  Then, we
 * cross-check this list against the number of pages that are yet to be
 * retired, and if we find inconsistencies, we scan every page_t in the
 * whole system looking for any pages that need to be scrubbed for UEs.
 * The background thread also uses this queue to determine which pages
 * it should keep trying to retire.
 */
#ifdef	DEBUG
#define	PR_PENDING_QMAX	32
#else	/* DEBUG */
#define	PR_PENDING_QMAX	256
#endif	/* DEBUG */
page_t		*pr_pending_q[PR_PENDING_QMAX];
kmutex_t	pr_q_mutex;

/*
 * Page retire global kstats
 */
struct page_retire_kstat {
	kstat_named_t	pr_retired;
	kstat_named_t	pr_requested;
	kstat_named_t	pr_requested_free;
	kstat_named_t	pr_enqueue_fail;
	kstat_named_t	pr_dequeue_fail;
	kstat_named_t	pr_pending;
	kstat_named_t	pr_failed;
	kstat_named_t	pr_failed_kernel;
	kstat_named_t	pr_limit;
	kstat_named_t	pr_limit_exceeded;
	kstat_named_t	pr_fma;
	kstat_named_t	pr_mce;
	kstat_named_t	pr_ue;
	kstat_named_t	pr_ue_cleared_retire;
	kstat_named_t	pr_ue_cleared_free;
	kstat_named_t	pr_ue_persistent;
	kstat_named_t	pr_unretired;
};

static struct page_retire_kstat page_retire_kstat = {
	{ "pages_retired",		KSTAT_DATA_UINT64},
	{ "pages_retire_request",	KSTAT_DATA_UINT64},
	{ "pages_retire_request_free",	KSTAT_DATA_UINT64},
	{ "pages_notenqueued",		KSTAT_DATA_UINT64},
	{ "pages_notdequeued",		KSTAT_DATA_UINT64},
	{ "pages_pending",		KSTAT_DATA_UINT64},
	{ "pages_deferred",		KSTAT_DATA_UINT64},
	{ "pages_deferred_kernel",	KSTAT_DATA_UINT64},
	{ "pages_limit",		KSTAT_DATA_UINT64},
	{ "pages_limit_exceeded",	KSTAT_DATA_UINT64},
	{ "pages_fma",			KSTAT_DATA_UINT64},
	{ "pages_multiple_ce",		KSTAT_DATA_UINT64},
	{ "pages_ue",			KSTAT_DATA_UINT64},
	{ "pages_ue_cleared_retired",	KSTAT_DATA_UINT64},
	{ "pages_ue_cleared_freed",	KSTAT_DATA_UINT64},
	{ "pages_ue_persistent",	KSTAT_DATA_UINT64},
	{ "pages_unretired",		KSTAT_DATA_UINT64},
};

static kstat_t	*page_retire_ksp = NULL;

#define	PR_INCR_KSTAT(stat)	\
	atomic_add_64(&(page_retire_kstat.stat.value.ui64), 1)
#define	PR_DECR_KSTAT(stat)	\
	atomic_add_64(&(page_retire_kstat.stat.value.ui64), -1)

#define	PR_KSTAT_RETIRED_CE	(page_retire_kstat.pr_mce.value.ui64)
#define	PR_KSTAT_RETIRED_FMA	(page_retire_kstat.pr_fma.value.ui64)
#define	PR_KSTAT_RETIRED_NOTUE	(PR_KSTAT_RETIRED_CE + PR_KSTAT_RETIRED_FMA)
#define	PR_KSTAT_PENDING	(page_retire_kstat.pr_pending.value.ui64)
#define	PR_KSTAT_EQFAIL		(page_retire_kstat.pr_enqueue_fail.value.ui64)
#define	PR_KSTAT_DQFAIL		(page_retire_kstat.pr_dequeue_fail.value.ui64)

/*
 * Limit the number of multiple CE page retires.
 * The default is 0.1% of physmem, or 1 in 1000 pages. This is set in
 * basis points, where 100 basis points equals one percent.
 */
#define	MCE_BPT	10
uint64_t	max_pages_retired_bps = MCE_BPT;
#define	PAGE_RETIRE_LIMIT	((physmem * max_pages_retired_bps) / 10000)
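/*
 * A worked example of the limit arithmetic above (ours, for illustration):
 * with 8K pages and 2 GB of memory, physmem is 262144 pages, so the default
 * 10 bps allow (262144 * 10) / 10000 = 262 pages to be retired for multiple
 * CEs before further CE-driven retires are refused.
 */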
/*
 * Control over the verbosity of page retirement.
 *
 * When set to zero (the default), no messages will be printed.
 * When set to one, summary messages will be printed.
 * When set > one, all messages will be printed.
 *
 * A value of one will trigger detailed messages for retirement operations,
 * and is intended as a platform tunable for processors where FMA's DE does
 * not run (e.g., spitfire). Values > one are intended for debugging only.
 */
int page_retire_messages = 0;

/*
 * Control whether or not we return scrubbed UE pages to service.
 * By default we do not since FMA wants to run its diagnostics first
 * and then ask us to unretire the page if it passes. Non-FMA platforms
 * may set this to zero so we will only retire recidivist pages. It should
 * not be changed by the user.
 */
int page_retire_first_ue = 1;

/*
 * Master enable for page retire. This prevents a CE or UE early in boot
 * from trying to retire a page before page_retire_init() has finished
 * setting things up. This is internal only and is not a tunable!
 */
static int pr_enable = 0;

extern struct vnode kvp;

#ifdef	DEBUG
struct page_retire_debug {
	int prd_dup;
	int prd_noaction;
	int prd_queued;
	int prd_notqueued;
	int prd_dequeue;
	int prd_top;
	int prd_locked;
	int prd_reloc;
	int prd_relocfail;
	int prd_mod;
	int prd_mod_late;
	int prd_kern;
	int prd_free;
	int prd_noreclaim;
	int prd_hashout;
	int prd_fma;
	int prd_uescrubbed;
	int prd_uenotscrubbed;
	int prd_mce;
	int prd_prlocked;
	int prd_prnotlocked;
	int prd_prretired;
	int prd_ulocked;
	int prd_unotretired;
	int prd_udestroy;
	int prd_uhashout;
	int prd_uunretired;
	int prd_unotlocked;
	int prd_checkhit;
	int prd_checkmiss;
	int prd_tctop;
	int prd_tclocked;
	int prd_hunt;
	int prd_dohunt;
	int prd_earlyhunt;
	int prd_latehunt;
	int prd_nofreedemote;
	int prd_nodemote;
	int prd_demoted;
} pr_debug;

#define	PR_DEBUG(foo)	((pr_debug.foo)++)

/*
 * A type histogram. We record the incidence of the various toxic
 * flag combinations along with the interesting page attributes. The
 * goal is to get as many combinations as we can while driving all
 * pr_debug values nonzero (indicating we've exercised all possible
 * code paths across all possible page types). Not all combinations
 * will make sense -- e.g. PRT_MOD|PRT_KERNEL.
 *
 * pr_type offset bit encoding (when examining with a debugger):
 *
 *	PRT_NAMED  - 0x4
 *	PRT_KERNEL - 0x8
 *	PRT_FREE   - 0x10
 *	PRT_MOD    - 0x20
 *	PRT_FMA    - 0x0
 *	PRT_MCE    - 0x40
 *	PRT_UE     - 0x80
 */

#define	PRT_NAMED	0x01
#define	PRT_KERNEL	0x02
#define	PRT_FREE	0x04
#define	PRT_MOD		0x08
#define	PRT_FMA		0x00	/* yes, this is not a mistake */
#define	PRT_MCE		0x10
#define	PRT_UE		0x20
#define	PRT_ALL		0x3F

int pr_types[PRT_ALL+1];

#define	PR_TYPES(pp)	{			\
	int whichtype = 0;			\
	if (pp->p_vnode)			\
		whichtype |= PRT_NAMED;		\
	if (PP_ISKVP(pp))			\
		whichtype |= PRT_KERNEL;	\
	if (PP_ISFREE(pp))			\
		whichtype |= PRT_FREE;		\
	if (hat_ismod(pp))			\
		whichtype |= PRT_MOD;		\
	if (pp->p_toxic & PR_UE)		\
		whichtype |= PRT_UE;		\
	if (pp->p_toxic & PR_MCE)		\
		whichtype |= PRT_MCE;		\
	pr_types[whichtype]++;			\
}

int recl_calls;
int recl_mtbf = 3;
int reloc_calls;
int reloc_mtbf = 7;
int pr_calls;
int pr_mtbf = 15;

#define	MTBF(v, f)	(((++(v)) & (f)) != (f))

#else	/* DEBUG */

#define	PR_DEBUG(foo)	/* nothing */
#define	PR_TYPES(foo)	/* nothing */
#define	MTBF(v, f)	(1)

#endif	/* DEBUG */
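/*
 * A worked example of the DEBUG-only MTBF() fault-injection macro above
 * (ours, for illustration): with recl_mtbf = 3, MTBF(recl_calls, 3)
 * evaluates ((++recl_calls) & 3) != 3, which is false on every fourth
 * call.  The reclaim, relocate, and retire paths are thus forced down
 * their failure legs 1/4, 1/8, and 1/16 of the time, respectively.
 */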
/*
 * page_retire_done() - completion processing
 *
 * Used by the page_retire code for common completion processing.
 * It keeps track of how many times a given result has happened,
 * and writes out an occasional message.
 *
 * May be called with a NULL pp (PRD_INVALID_PA case).
 */
#define	PRD_INVALID_KEY		-1
#define	PRD_SUCCESS		0
#define	PRD_PENDING		1
#define	PRD_FAILED		2
#define	PRD_DUPLICATE		3
#define	PRD_INVALID_PA		4
#define	PRD_LIMIT		5
#define	PRD_UE_SCRUBBED		6
#define	PRD_UNR_SUCCESS		7
#define	PRD_UNR_CANTLOCK	8
#define	PRD_UNR_NOT		9

typedef struct page_retire_op {
	int	pr_key;		/* one of the PRD_* defines from above */
	int	pr_count;	/* How many times this has happened */
	int	pr_retval;	/* return value */
	int	pr_msglvl;	/* message level - when to print */
	char	*pr_message;	/* Cryptic message for field service */
} page_retire_op_t;

static page_retire_op_t page_retire_ops[] = {
	/* key			count	retval	msglvl	message */
	{PRD_SUCCESS,		0,	0,	1,
		"Page 0x%08x.%08x removed from service"},
	{PRD_PENDING,		0,	EAGAIN,	2,
		"Page 0x%08x.%08x will be retired on free"},
	{PRD_FAILED,		0,	EAGAIN,	0, NULL},
	{PRD_DUPLICATE,		0,	EBUSY,	2,
		"Page 0x%08x.%08x already retired"},
	{PRD_INVALID_PA,	0,	EINVAL,	2,
		"PA 0x%08x.%08x is not a relocatable page"},
	{PRD_LIMIT,		0,	0,	1,
		"Page 0x%08x.%08x not retired due to limit exceeded"},
	{PRD_UE_SCRUBBED,	0,	0,	1,
		"Previously reported error on page 0x%08x.%08x cleared"},
	{PRD_UNR_SUCCESS,	0,	0,	1,
		"Page 0x%08x.%08x returned to service"},
	{PRD_UNR_CANTLOCK,	0,	EAGAIN,	2,
		"Page 0x%08x.%08x could not be unretired"},
	{PRD_UNR_NOT,		0,	EBADF,	2,
		"Page 0x%08x.%08x is not retired"},
	{PRD_INVALID_KEY,	0,	0,	0, NULL} /* MUST BE LAST! */
};

/*
 * print a message if page_retire_messages is true.
 */
#define	PR_MESSAGE(debuglvl, msglvl, msg, pa)				\
{									\
	uint64_t p = (uint64_t)pa;					\
	if (page_retire_messages >= msglvl && msg != NULL) {		\
		cmn_err(debuglvl, msg,					\
		    (uint32_t)(p >> 32), (uint32_t)p);			\
	}								\
}
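/*
 * For example (ours, for illustration): PR_MESSAGE(CE_NOTE, 1, msg, pa)
 * prints the 64-bit physical address as two 32-bit halves to match the
 * "0x%08x.%08x" formats above, so PA 0x123456789000 appears in the log
 * as "0x00001234.56789000".
 */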
/*
 * Note that multiple bits may be set in a single settoxic operation.
 * May be called without the page locked.
 */
void
page_settoxic(page_t *pp, uchar_t bits)
{
	atomic_or_8(&pp->p_toxic, bits);
}

/*
 * Note that multiple bits may be cleared in a single clrtoxic operation.
 * Must be called with the page exclusively locked to prevent races which
 * may attempt to retire a page without any toxic bits set.
 */
void
page_clrtoxic(page_t *pp, uchar_t bits)
{
	ASSERT(PAGE_EXCL(pp));
	atomic_and_8(&pp->p_toxic, ~bits);
}
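/*
 * Illustrative only (our sketch, not a caller quoted from the tree) --
 * the locking discipline the two routines above imply:
 *
 *	page_settoxic(pp, PR_UE);		any context, page unlocked
 *	...
 *	if (page_trylock(pp, SE_EXCL)) {
 *		page_clrtoxic(pp, PR_UE);	only under SE_EXCL
 *		page_unlock(pp);
 *	}
 */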
/*
 * Prints any page retire messages to the user, and decides what
 * error code is appropriate for the condition reported.
 */
static int
page_retire_done(page_t *pp, int code)
{
	page_retire_op_t *prop;
	uint64_t	pa = 0;
	int		i;

	if (pp != NULL) {
		pa = mmu_ptob((uint64_t)pp->p_pagenum);
	}

	prop = NULL;
	for (i = 0; page_retire_ops[i].pr_key != PRD_INVALID_KEY; i++) {
		if (page_retire_ops[i].pr_key == code) {
			prop = &page_retire_ops[i];
			break;
		}
	}

#ifdef	DEBUG
	if (page_retire_ops[i].pr_key == PRD_INVALID_KEY) {
		cmn_err(CE_PANIC, "page_retire_done: Invalid opcode %d", code);
	}
#endif

	ASSERT(prop->pr_key == code);

	prop->pr_count++;

	PR_MESSAGE(CE_NOTE, prop->pr_msglvl, prop->pr_message, pa);
	if (pp != NULL) {
		page_settoxic(pp, PR_MSG);
	}

	return (prop->pr_retval);
}

/*
 * On a reboot, our friend mdboot() wants to clear up any PP_PR_REQ() pages
 * that we were not able to retire. On large machines, walking the complete
 * page_t array and looking at every page_t takes too long. So, as a page is
 * marked toxic, we track it using a list that can be processed at reboot
 * time.  page_retire_enqueue() will do its best to try to avoid duplicate
 * entries, but if we get too many errors at once the queue can overflow,
 * in which case we will end up walking every page_t as a last resort.
 * The background thread also makes use of this queue to find which pages
 * are pending retirement.
 */
static void
page_retire_enqueue(page_t *pp)
{
	int	nslot = -1;
	int	i;

	mutex_enter(&pr_q_mutex);

	/*
	 * Check to make sure retire hasn't already dequeued it.
	 * In the meantime if the page was cleaned up, no need
	 * to enqueue it.
	 */
	if (PP_RETIRED(pp) || pp->p_toxic == 0) {
		mutex_exit(&pr_q_mutex);
		PR_DEBUG(prd_noaction);
		return;
	}

	for (i = 0; i < PR_PENDING_QMAX; i++) {
		if (pr_pending_q[i] == pp) {
			mutex_exit(&pr_q_mutex);
			PR_DEBUG(prd_dup);
			return;
		} else if (nslot == -1 && pr_pending_q[i] == NULL) {
			nslot = i;
		}
	}

	PR_INCR_KSTAT(pr_pending);

	if (nslot != -1) {
		pr_pending_q[nslot] = pp;
		PR_DEBUG(prd_queued);
	} else {
		PR_INCR_KSTAT(pr_enqueue_fail);
		PR_DEBUG(prd_notqueued);
	}
	mutex_exit(&pr_q_mutex);
}

static void
page_retire_dequeue(page_t *pp)
{
	int i;

	mutex_enter(&pr_q_mutex);

	for (i = 0; i < PR_PENDING_QMAX; i++) {
		if (pr_pending_q[i] == pp) {
			pr_pending_q[i] = NULL;
			break;
		}
	}

	if (i == PR_PENDING_QMAX) {
		PR_INCR_KSTAT(pr_dequeue_fail);
	}

	PR_DECR_KSTAT(pr_pending);
	PR_DEBUG(prd_dequeue);

	mutex_exit(&pr_q_mutex);
}

/*
 * Act like page_destroy(), but instead of freeing the page, hash it onto
 * the retired_pages vnode, and mark it retired.
 *
 * For fun, we try to scrub the page until it's squeaky clean.
 * availrmem is adjusted here.
 */
static void
page_retire_destroy(page_t *pp)
{
	u_offset_t off = (u_offset_t)((uintptr_t)pp);

	ASSERT(PAGE_EXCL(pp));
	ASSERT(!PP_ISFREE(pp));
	ASSERT(pp->p_szc == 0);
	ASSERT(!hat_page_is_mapped(pp));
	ASSERT(!pp->p_vnode);

	page_clr_all_props(pp);
	pagescrub(pp, 0, MMU_PAGESIZE);

	pp->p_next = NULL;
	pp->p_prev = NULL;
	if (page_hashin(pp, retired_pages, off, NULL) == 0) {
		cmn_err(CE_PANIC, "retired page %p hashin failed", (void *)pp);
	}

	page_settoxic(pp, PR_RETIRED);
	page_clrtoxic(pp, PR_BUSY);
	page_retire_dequeue(pp);
	PR_INCR_KSTAT(pr_retired);

	if (pp->p_toxic & PR_FMA) {
		PR_INCR_KSTAT(pr_fma);
	} else if (pp->p_toxic & PR_UE) {
		PR_INCR_KSTAT(pr_ue);
	} else {
		PR_INCR_KSTAT(pr_mce);
	}

	mutex_enter(&freemem_lock);
	availrmem--;
	mutex_exit(&freemem_lock);

	page_unlock(pp);
}

/*
 * Check whether the number of pages which have been retired already exceeds
 * the maximum allowable percentage of memory which may be retired.
 *
 * Returns 1 if the limit has been exceeded.
 */
static int
page_retire_limit(void)
{
	if (PR_KSTAT_RETIRED_NOTUE >= (uint64_t)PAGE_RETIRE_LIMIT) {
		PR_INCR_KSTAT(pr_limit_exceeded);
		return (1);
	}

	return (0);
}

#define	MSG_DM	"Data Mismatch occurred at PA 0x%08x.%08x"		\
	"[ 0x%x != 0x%x ] while attempting to clear previously "	\
	"reported error; page removed from service"

#define	MSG_UE	"Uncorrectable Error occurred at PA 0x%08x.%08x while "	\
	"attempting to clear previously reported error; page removed "	\
	"from service"

/*
 * Attempt to clear a UE from a page.
 * Returns 1 if the error has been successfully cleared.
 */
static int
page_clear_transient_ue(page_t *pp)
{
	caddr_t		kaddr;
	uint8_t		rb, wb;
	uint64_t	pa;
	uint32_t	pa_hi, pa_lo;
	on_trap_data_t	otd;
	int		errors = 0;
	int		i;

	ASSERT(PAGE_EXCL(pp));
	ASSERT(PP_PR_REQ(pp));
	ASSERT(pp->p_szc == 0);
	ASSERT(!hat_page_is_mapped(pp));

	/*
	 * Clear the page and attempt to clear the UE.  If we trap
	 * on the next access to the page, we know the UE has recurred.
	 */
	pagescrub(pp, 0, PAGESIZE);

	/*
	 * Map the page and write a bunch of bit patterns to compare
	 * what we wrote with what we read back.  This isn't a perfect
	 * test but it should be good enough to catch most of the
	 * recurring UEs. If this fails to catch a recurrent UE, we'll
	 * retire the page the next time we see a UE on the page.
	 */
	kaddr = ppmapin(pp, PROT_READ|PROT_WRITE, (caddr_t)-1);

	pa = ptob((uint64_t)page_pptonum(pp));
	pa_hi = (uint32_t)(pa >> 32);
	pa_lo = (uint32_t)pa;

	/*
	 * Fill the page with each (0x00 - 0xFF] bit pattern, flushing
	 * the cache in between reading and writing.  We do this under
	 * on_trap() protection to avoid recursion.  (The 0x00 pattern
	 * is omitted since pagescrub() has just filled the page with
	 * zeros.)
	 */
	if (on_trap(&otd, OT_DATA_EC)) {
		PR_MESSAGE(CE_WARN, 1, MSG_UE, pa);
		errors = 1;
	} else {
		for (wb = 0xff; wb > 0; wb--) {
			for (i = 0; i < PAGESIZE; i++) {
				kaddr[i] = wb;
			}

			sync_data_memory(kaddr, PAGESIZE);

			for (i = 0; i < PAGESIZE; i++) {
				rb = kaddr[i];
				if (rb != wb) {
					/*
					 * We had a mismatch without a trap.
					 * Uh-oh. Something is really wrong
					 * with this system.
					 */
					if (page_retire_messages) {
						cmn_err(CE_WARN, MSG_DM,
						    pa_hi, pa_lo, rb, wb);
					}
					errors = 1;
					goto out;	/* double break */
				}
			}
		}
	}
out:
	no_trap();
	ppmapout(kaddr);

	return (errors ? 0 : 1);
}
/*
 * Try to clear a page_t with a single UE. If the UE was transient, it is
 * returned to service, and we return 1.  Otherwise we return 0 meaning
 * that further processing is required to retire the page.
 */
static int
page_retire_transient_ue(page_t *pp)
{
	ASSERT(PAGE_EXCL(pp));
	ASSERT(!hat_page_is_mapped(pp));

	/*
	 * If this page is a repeat offender, retire him under the
	 * "two strikes and you're out" rule.  The caller is responsible
	 * for scrubbing the page to try to clear the error.
	 */
	if (pp->p_toxic & PR_UE_SCRUBBED) {
		PR_INCR_KSTAT(pr_ue_persistent);
		return (0);
	}

	if (page_clear_transient_ue(pp)) {
		/*
		 * We set the PR_UE_SCRUBBED bit; if we ever see this
		 * page again, we will retire it, no questions asked.
		 */
		page_settoxic(pp, PR_UE_SCRUBBED);

		if (page_retire_first_ue) {
			PR_INCR_KSTAT(pr_ue_cleared_retire);
			return (0);
		} else {
			PR_INCR_KSTAT(pr_ue_cleared_free);

			page_clrtoxic(pp, PR_UE | PR_MCE | PR_MSG | PR_BUSY);
			page_retire_dequeue(pp);

			/* LINTED: CONSTCOND */
			VN_DISPOSE(pp, B_FREE, 1, kcred);
			return (1);
		}
	}

	PR_INCR_KSTAT(pr_ue_persistent);
	return (0);
}

/*
 * Update the statistics dynamically when our kstat is read.
 */
static int
page_retire_kstat_update(kstat_t *ksp, int rw)
{
	struct page_retire_kstat *pr;

	if (ksp == NULL)
		return (EINVAL);

	switch (rw) {

	case KSTAT_READ:
		pr = (struct page_retire_kstat *)ksp->ks_data;
		ASSERT(pr == &page_retire_kstat);
		pr->pr_limit.value.ui64 = PAGE_RETIRE_LIMIT;
		return (0);

	case KSTAT_WRITE:
		return (EACCES);

	default:
		return (EINVAL);
	}
	/*NOTREACHED*/
}

/*
 * Initialize the page retire mechanism:
 *
 *	- Establish the correctable error retire limit.
 *	- Initialize locks.
 *	- Build the retired_pages vnode.
 *	- Set up the kstats.
 *	- Fire off the background thread.
 *	- Tell page_tryretire() it's OK to start retiring pages.
 */
void
page_retire_init(void)
{
	const fs_operation_def_t retired_vnodeops_template[] = {NULL, NULL};
	struct vnodeops *vops;

	const uint_t page_retire_ndata =
	    sizeof (page_retire_kstat) / sizeof (kstat_named_t);

	ASSERT(page_retire_ksp == NULL);

	if (max_pages_retired_bps <= 0) {
		max_pages_retired_bps = MCE_BPT;
	}

	mutex_init(&pr_q_mutex, NULL, MUTEX_DEFAULT, NULL);

	retired_pages = vn_alloc(KM_SLEEP);
	if (vn_make_ops("retired_pages", retired_vnodeops_template, &vops)) {
		cmn_err(CE_PANIC,
		    "page_retire_init: can't make retired vnodeops");
	}
	vn_setops(retired_pages, vops);

	if ((page_retire_ksp = kstat_create("unix", 0, "page_retire",
	    "misc", KSTAT_TYPE_NAMED, page_retire_ndata,
	    KSTAT_FLAG_VIRTUAL)) == NULL) {
		cmn_err(CE_WARN, "kstat_create for page_retire failed");
	} else {
		page_retire_ksp->ks_data = (void *)&page_retire_kstat;
		page_retire_ksp->ks_update = page_retire_kstat_update;
		kstat_install(page_retire_ksp);
	}

	pr_thread_shortwait = 23 * hz;
	pr_thread_longwait = 1201 * hz;
	mutex_init(&pr_thread_mutex, NULL, MUTEX_DEFAULT, NULL);
	cv_init(&pr_cv, NULL, CV_DEFAULT, NULL);
	pr_thread_id = thread_create(NULL, 0, page_retire_thread, NULL, 0, &p0,
	    TS_RUN, minclsyspri);

	pr_enable = 1;
}
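/*
 * The kstats installed above are visible from userland; assuming the
 * standard kstat(1M) utility, something like
 *
 *	# kstat -m unix -n page_retire
 *
 * will display pages_retired, pages_pending, pages_limit, and friends.
 */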
/*
 * page_retire_hunt() callback for the retire thread.
 */
static void
page_retire_thread_cb(page_t *pp)
{
	PR_DEBUG(prd_tctop);
	if (!PP_ISKVP(pp) && page_trylock(pp, SE_EXCL)) {
		PR_DEBUG(prd_tclocked);
		page_unlock(pp);
	}
}

/*
 * page_retire_hunt() callback for mdboot().
 *
 * It is necessary to scrub any failing pages prior to reboot in order to
 * prevent a latent error trap from occurring on the next boot.
 */
void
page_retire_mdboot_cb(page_t *pp)
{
	/*
	 * Don't scrub the kernel, since we might still need it, unless
	 * we have UEs on the page, in which case we have nothing to lose.
	 */
	if (!PP_ISKVP(pp) || PP_TOXIC(pp)) {
		pp->p_selock = -1;	/* pacify ASSERTs */
		PP_CLRFREE(pp);
		pagescrub(pp, 0, PAGESIZE);
		pp->p_selock = 0;
	}
	pp->p_toxic = 0;
}

/*
 * Hunt down any pages in the system that have not yet been retired, invoking
 * the provided callback function on each of them.
 */
void
page_retire_hunt(void (*callback)(page_t *))
{
	page_t *pp;
	page_t *first;
	uint64_t tbr, found;
	int i;

	PR_DEBUG(prd_hunt);

	if (PR_KSTAT_PENDING == 0) {
		return;
	}

	PR_DEBUG(prd_dohunt);

	found = 0;
	mutex_enter(&pr_q_mutex);

	tbr = PR_KSTAT_PENDING;

	for (i = 0; i < PR_PENDING_QMAX; i++) {
		if ((pp = pr_pending_q[i]) != NULL) {
			mutex_exit(&pr_q_mutex);
			callback(pp);
			mutex_enter(&pr_q_mutex);
			found++;
		}
	}

	/*
	 * If every enqueue failure was matched by a dequeue failure, the
	 * queue never silently dropped a page; having visited `tbr' pages,
	 * we have seen everything that is pending and can stop early.
	 */
	if (PR_KSTAT_EQFAIL == PR_KSTAT_DQFAIL && found == tbr) {
		mutex_exit(&pr_q_mutex);
		PR_DEBUG(prd_earlyhunt);
		return;
	}
	mutex_exit(&pr_q_mutex);

	PR_DEBUG(prd_latehunt);

	/*
	 * We've lost track of a page somewhere. Hunt it down.
	 */
	memsegs_lock(0);
	pp = first = page_first();
	do {
		if (PP_PR_REQ(pp)) {
			callback(pp);
			if (++found == tbr) {
				break;	/* got 'em all */
			}
		}
	} while ((pp = page_next(pp)) != first);
	memsegs_unlock(0);
}
/*
 * The page_retire_thread loops forever, looking to see if there are
 * pages still waiting to be retired.
 */
static void
page_retire_thread(void)
{
	callb_cpr_t c;

	CALLB_CPR_INIT(&c, &pr_thread_mutex, callb_generic_cpr, "page_retire");

	mutex_enter(&pr_thread_mutex);
	for (;;) {
		if (pr_enable && PR_KSTAT_PENDING) {
			/*
			 * Sigh. It's SO broken how we have to try to shake
			 * loose the holder of the page. Since we have no
			 * idea who or what has it locked, we go bang on
			 * every door in the city to try to locate it.
			 */
			kmem_reap();
			seg_preap();
			page_retire_hunt(page_retire_thread_cb);
			CALLB_CPR_SAFE_BEGIN(&c);
			(void) cv_timedwait(&pr_cv, &pr_thread_mutex,
			    lbolt + pr_thread_shortwait);
			CALLB_CPR_SAFE_END(&c, &pr_thread_mutex);
		} else {
			CALLB_CPR_SAFE_BEGIN(&c);
			(void) cv_timedwait(&pr_cv, &pr_thread_mutex,
			    lbolt + pr_thread_longwait);
			CALLB_CPR_SAFE_END(&c, &pr_thread_mutex);
		}
	}
	/*NOTREACHED*/
}

/*
 * page_retire_pp() decides what to do with a failing page.
 *
 * When we get a free page (e.g. the scrubber or in the free path) life is
 * nice because the page is clean and marked free -- those always retire
 * nicely. From there we go by order of difficulty. If the page has data,
 * we attempt to relocate its contents to a suitable replacement page. If
 * that does not succeed, we look to see if it is clean. If after all of
 * this we have a clean, unmapped page (which we usually do!), we retire it.
 * If the page is not clean, we still process it if it has a UE; for CEs
 * or FMA requests, we fail, leaving the page in service. The page will
 * eventually be tried again later. We always return with the page unlocked
 * since we are called from page_unlock().
 *
 * We don't call panic or do anything fancy down in here. Our boss the DE
 * gets paid handsomely to do his job of figuring out what to do when errors
 * occur. We just do what he tells us to do.
 */
static int
page_retire_pp(page_t *pp)
{
	int		toxic;

	ASSERT(PAGE_EXCL(pp));
	ASSERT(pp->p_iolock_state == 0);
	ASSERT(pp->p_szc == 0);

	PR_DEBUG(prd_top);
	PR_TYPES(pp);

	toxic = pp->p_toxic;
	ASSERT(toxic & PR_REASONS);

	if ((toxic & (PR_FMA | PR_MCE)) && !(toxic & PR_UE) &&
	    page_retire_limit()) {
		page_clrtoxic(pp, PR_FMA | PR_MCE | PR_MSG | PR_BUSY);
		page_retire_dequeue(pp);
		page_unlock(pp);
		return (page_retire_done(pp, PRD_LIMIT));
	}

	if (PP_ISFREE(pp)) {
		int dbgnoreclaim = MTBF(recl_calls, recl_mtbf) == 0;

		PR_DEBUG(prd_free);

		if (dbgnoreclaim || !page_reclaim(pp, NULL)) {
			PR_DEBUG(prd_noreclaim);
			PR_INCR_KSTAT(pr_failed);
			/*
			 * page_reclaim() returns with `pp' unlocked when
			 * it fails.
			 */
			if (dbgnoreclaim)
				page_unlock(pp);
			return (page_retire_done(pp, PRD_FAILED));
		}
	}
	ASSERT(!PP_ISFREE(pp));

	if ((toxic & PR_UE) == 0 && pp->p_vnode && !PP_ISNORELOCKERNEL(pp) &&
	    MTBF(reloc_calls, reloc_mtbf)) {
		page_t *newpp;
		spgcnt_t count;

		/*
		 * If we can relocate the page, great! newpp will go
		 * on without us, and everything is fine.  Regardless
		 * of whether the relocation succeeds, we are still
		 * going to take `pp' around back and shoot it.
		 */
		newpp = NULL;
		if (page_relocate(&pp, &newpp, 0, 0, &count, NULL) == 0) {
			PR_DEBUG(prd_reloc);
			page_unlock(newpp);
			ASSERT(hat_page_getattr(pp, P_MOD) == 0);
		} else {
			PR_DEBUG(prd_relocfail);
		}
	}

	if (hat_ismod(pp)) {
		PR_DEBUG(prd_mod);
		PR_INCR_KSTAT(pr_failed);
		page_unlock(pp);
		return (page_retire_done(pp, PRD_FAILED));
	}

	if (PP_ISKVP(pp)) {
		PR_DEBUG(prd_kern);
		PR_INCR_KSTAT(pr_failed_kernel);
		page_unlock(pp);
		return (page_retire_done(pp, PRD_FAILED));
	}

	if (pp->p_lckcnt || pp->p_cowcnt) {
		PR_DEBUG(prd_locked);
		PR_INCR_KSTAT(pr_failed);
		page_unlock(pp);
		return (page_retire_done(pp, PRD_FAILED));
	}

	(void) hat_pageunload(pp, HAT_FORCE_PGUNLOAD);
	ASSERT(!hat_page_is_mapped(pp));

	/*
	 * If the page is modified, and was not relocated, we can't
	 * retire it without dropping data on the floor. We have to
	 * recheck after unloading since the dirty bit could have been
	 * set since we last checked.
	 */
	if (hat_ismod(pp)) {
		PR_DEBUG(prd_mod_late);
		PR_INCR_KSTAT(pr_failed);
		page_unlock(pp);
		return (page_retire_done(pp, PRD_FAILED));
	}

	if (pp->p_vnode) {
		PR_DEBUG(prd_hashout);
		page_hashout(pp, NULL);
	}
	ASSERT(!pp->p_vnode);

	/*
	 * The problem page is locked, demoted, unmapped, not free,
	 * hashed out, and not COW or mlocked (whew!).
	 *
	 * Now we select our ammunition, take it around back, and shoot it.
	 */
	if (toxic & PR_UE) {
		if (page_retire_transient_ue(pp)) {
			PR_DEBUG(prd_uescrubbed);
			return (page_retire_done(pp, PRD_UE_SCRUBBED));
		} else {
			PR_DEBUG(prd_uenotscrubbed);
			page_retire_destroy(pp);
			return (page_retire_done(pp, PRD_SUCCESS));
		}
	} else if (toxic & PR_FMA) {
		PR_DEBUG(prd_fma);
		page_retire_destroy(pp);
		return (page_retire_done(pp, PRD_SUCCESS));
	} else if (toxic & PR_MCE) {
		PR_DEBUG(prd_mce);
		page_retire_destroy(pp);
		return (page_retire_done(pp, PRD_SUCCESS));
	}
	panic("page_retire_pp: bad toxic flags %d", toxic);
	/*NOTREACHED*/
}

/*
 * Try to retire a page when we stumble onto it in the page lock routines.
 */
void
page_tryretire(page_t *pp)
{
	ASSERT(PAGE_EXCL(pp));

	if (!pr_enable) {
		page_unlock(pp);
		return;
	}

	/*
	 * If the page is a big page, try to break it up.
	 *
	 * If there are other bad pages besides `pp', they will be
	 * recursively retired for us thanks to a bit of magic: the
	 * page_unlock() of each toxic-marked constituent page comes
	 * back through here (see the Big Theory Statement above).
	 * If the page is a small page with errors, try to retire it.
	 */
	if (pp->p_szc > 0) {
		if (PP_ISFREE(pp) && !page_try_demote_free_pages(pp)) {
			page_unlock(pp);
			PR_DEBUG(prd_nofreedemote);
			return;
		} else if (!page_try_demote_pages(pp)) {
			page_unlock(pp);
			PR_DEBUG(prd_nodemote);
			return;
		}
		PR_DEBUG(prd_demoted);
		page_unlock(pp);
	} else {
		(void) page_retire_pp(pp);
	}
}
/*
 * page_retire() - the front door in to retire a page.
 *
 * Ideally, page_retire() would instantly retire the requested page.
 * Unfortunately, some pages are locked or otherwise tied up and cannot be
 * retired right away. To deal with that, bits are set in p_toxic of the
 * page_t. An attempt is made to lock the page; if the attempt is successful,
 * we instantly unlock the page counting on page_unlock() to notice p_toxic
 * is nonzero and to call back into page_retire_pp(). Success is determined
 * by looking to see whether the page has been retired once it has been
 * unlocked.
 *
 * Returns:
 *
 *	- 0 on success,
 *	- EINVAL when the PA is whacko,
 *	- EBUSY if the page is already retired, or
 *	- EAGAIN if the page could not be _immediately_ retired.
 */
int
page_retire(uint64_t pa, uchar_t reason)
{
	page_t	*pp;

	ASSERT(reason & PR_REASONS);		/* there must be a reason */
	ASSERT(!(reason & ~PR_REASONS));	/* but no other bits */

	pp = page_numtopp_nolock(mmu_btop(pa));
	if (pp == NULL) {
		PR_MESSAGE(CE_WARN, 1, "Cannot schedule clearing of error on"
		    " page 0x%08x.%08x; page is not relocatable memory", pa);
		return (page_retire_done(pp, PRD_INVALID_PA));
	}
	if (PP_RETIRED(pp)) {
		return (page_retire_done(pp, PRD_DUPLICATE));
	}

	if (reason & PR_UE) {
		PR_MESSAGE(CE_NOTE, 1, "Scheduling clearing of error on"
		    " page 0x%08x.%08x", pa);
	} else {
		PR_MESSAGE(CE_NOTE, 1, "Scheduling removal of"
		    " page 0x%08x.%08x", pa);
	}
	page_settoxic(pp, reason);
	page_retire_enqueue(pp);

	/*
	 * And now for some magic.
	 *
	 * We marked this page toxic up above.  All there is left to do is
	 * to try to lock the page and then unlock it.  The page lock routines
	 * will intercept the page and retire it if they can.  If the page
	 * cannot be locked, that's okay -- page_unlock() or the background
	 * thread will eventually get to it; until then, the lock routines
	 * will deny further locks on it.
	 */
	if (MTBF(pr_calls, pr_mtbf) && page_trylock(pp, SE_EXCL)) {
		PR_DEBUG(prd_prlocked);
		page_unlock(pp);
	} else {
		PR_DEBUG(prd_prnotlocked);
	}

	if (PP_RETIRED(pp)) {
		PR_DEBUG(prd_prretired);
		return (0);
	} else {
		cv_signal(&pr_cv);
		PR_INCR_KSTAT(pr_failed);

		if (pp->p_toxic & PR_MSG) {
			return (page_retire_done(pp, PRD_FAILED));
		} else {
			return (page_retire_done(pp, PRD_PENDING));
		}
	}
}
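/*
 * Illustrative only -- how a caller such as the ioctl path might act on
 * the return values documented above (our sketch, not quoted code):
 *
 *	switch (page_retire(pa, PR_FMA)) {
 *	case 0:		retired immediately
 *	case EAGAIN:	marked; will be retired at the next unlock
 *	case EBUSY:	already retired
 *	case EINVAL:	pa does not name a relocatable page
 *	}
 */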
/*
 * Take a retired page off the retired-pages vnode and clear the toxic flags.
 * If "free" is nonzero, lock it and put it back on the freelist. If "free"
 * is zero, the caller already holds SE_EXCL lock so we simply unretire it
 * and don't do anything else with it.
 *
 * Any unretire messages are printed from this routine.
 *
 * Returns 0 if page pp was unretired; else an error code.
 */
int
page_unretire_pp(page_t *pp, int free)
{
	/*
	 * To be retired, a page has to be hashed onto the retired_pages vnode
	 * and have PR_RETIRED set in p_toxic.
	 */
	if (free == 0 || page_try_reclaim_lock(pp, SE_EXCL, SE_RETIRED)) {
		ASSERT(PAGE_EXCL(pp));
		PR_DEBUG(prd_ulocked);
		if (!PP_RETIRED(pp)) {
			PR_DEBUG(prd_unotretired);
			page_unlock(pp);
			return (page_retire_done(pp, PRD_UNR_NOT));
		}

		PR_MESSAGE(CE_NOTE, 1, "unretiring retired"
		    " page 0x%08x.%08x", mmu_ptob((uint64_t)pp->p_pagenum));
		if (pp->p_toxic & PR_FMA) {
			PR_DECR_KSTAT(pr_fma);
		} else if (pp->p_toxic & PR_UE) {
			PR_DECR_KSTAT(pr_ue);
		} else {
			PR_DECR_KSTAT(pr_mce);
		}
		page_clrtoxic(pp, PR_ALLFLAGS);

		if (free) {
			PR_DEBUG(prd_udestroy);
			page_destroy(pp, 0);
		} else {
			PR_DEBUG(prd_uhashout);
			page_hashout(pp, NULL);
		}

		mutex_enter(&freemem_lock);
		availrmem++;
		mutex_exit(&freemem_lock);

		PR_DEBUG(prd_uunretired);
		PR_DECR_KSTAT(pr_retired);
		PR_INCR_KSTAT(pr_unretired);
		return (page_retire_done(pp, PRD_UNR_SUCCESS));
	}
	PR_DEBUG(prd_unotlocked);
	return (page_retire_done(pp, PRD_UNR_CANTLOCK));
}

/*
 * Return a page to service by moving it from the retired_pages vnode
 * onto the freelist.
 *
 * Called from mmioctl_page_retire() on behalf of the FMA DE.
 *
 * Returns:
 *
 *	- 0 if the page is unretired,
 *	- EAGAIN if the pp can not be locked,
 *	- EINVAL if the PA is whacko, and
 *	- EBADF if the pp is not retired.
 */
int
page_unretire(uint64_t pa)
{
	page_t	*pp;

	pp = page_numtopp_nolock(mmu_btop(pa));
	if (pp == NULL) {
		return (page_retire_done(pp, PRD_INVALID_PA));
	}

	return (page_unretire_pp(pp, 1));
}

/*
 * Test a page to see if it is retired. If errors is non-NULL, the toxic
 * bits of the page are returned. Returns 0 on success, error code on failure.
 */
int
page_retire_check_pp(page_t *pp, uint64_t *errors)
{
	int rc;

	if (PP_RETIRED(pp)) {
		PR_DEBUG(prd_checkhit);
		rc = 0;
	} else {
		PR_DEBUG(prd_checkmiss);
		rc = EAGAIN;
	}

	/*
	 * We have magically arranged the bit values returned to fmd(1M)
	 * to line up with the FMA, MCE, and UE bits of the page_t.
	 */
	if (errors) {
		uint64_t toxic = (uint64_t)(pp->p_toxic & PR_ERRMASK);
		if (toxic & PR_UE_SCRUBBED) {
			toxic &= ~PR_UE_SCRUBBED;
			toxic |= PR_UE;
		}
		*errors = toxic;
	}

	return (rc);
}

/*
 * Test to see if the page_t for a given PA is retired, and return the
 * hardware errors we have seen on the page if requested.
 *
 * Called from mmioctl_page_retire on behalf of the FMA DE.
 *
 * Returns:
 *
 *	- 0 if the page is retired,
 *	- EAGAIN if it is not, and
 *	- EINVAL if the PA is whacko.
 */
int
page_retire_check(uint64_t pa, uint64_t *errors)
{
	page_t	*pp;

	if (errors) {
		*errors = 0;
	}

	pp = page_numtopp_nolock(mmu_btop(pa));
	if (pp == NULL) {
		return (page_retire_done(pp, PRD_INVALID_PA));
	}

	return (page_retire_check_pp(pp, errors));
}
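/*
 * Illustrative only -- how the DE side might poll for completion after
 * requesting a retire (our sketch, not quoted code):
 *
 *	uint64_t errs;
 *
 *	if (page_retire_check(pa, &errs) == 0) {
 *		page is retired; errs holds its PR_FMA/PR_MCE/PR_UE bits
 *	}
 */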
/*
 * Page retire self-test. For now, it always returns 0.
 */
int
page_retire_test(void)
{
	page_t *first, *pp, *cpp, *cpp2, *lpp;

	/*
	 * Tests the corner case where a large page can't be retired
	 * because one of the constituent pages is locked. We mark
	 * one page to be retired and try to retire it, and mark the
	 * other page to be retired but don't try to retire it, so
	 * that page_unlock() in the failure path will recurse and try
	 * to retire THAT page. This is the worst possible situation
	 * we can get ourselves into.
	 */
	memsegs_lock(0);
	pp = first = page_first();
	do {
		if (pp->p_szc && PP_PAGEROOT(pp) == pp) {
			cpp = pp + 1;
			lpp = PP_ISFREE(pp)? pp : pp + 2;
			cpp2 = pp + 3;
			if (!page_trylock(lpp, pp == lpp? SE_EXCL : SE_SHARED))
				continue;
			if (!page_trylock(cpp, SE_EXCL)) {
				page_unlock(lpp);
				continue;
			}
			page_settoxic(cpp, PR_FMA | PR_BUSY);
			page_settoxic(cpp2, PR_FMA);
			page_tryretire(cpp);	/* will fail */
			page_unlock(lpp);
			/*
			 * page_retire() takes a physical address, not a
			 * page number, so convert before calling.
			 */
			(void) page_retire(mmu_ptob((uint64_t)cpp->p_pagenum),
			    PR_FMA);
			(void) page_retire(mmu_ptob((uint64_t)cpp2->p_pagenum),
			    PR_FMA);
		}
	} while ((pp = page_next(pp)) != first);
	memsegs_unlock(0);

	return (0);
}