#
2e370a5c |
| 26-May-2009 |
Oleksandr Tymoshenko <gonzo@FreeBSD.org> |
Merge from HEAD
|
#
b522d2c9 |
| 17-May-2009 |
Kip Macy <kmacy@FreeBSD.org> |
correct range in comment pointed out by alc
|
#
e1279022 |
| 17-May-2009 |
Kip Macy <kmacy@FreeBSD.org> |
update vm map comment
pointed out by Larry Rosenman
|
#
b6d82b1a |
| 16-May-2009 |
Kip Macy <kmacy@FreeBSD.org> |
Increase default kernel map to 512GB
I briefly discussed this with alc. It could lead to problems for machines with more than 64GB of RAM. However, that seems unlikely in practice.
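As a back-of-the-envelope check (the arithmetic below is illustrative, not quoted from the commit), 512GB is exactly the span of a single PML4 slot on amd64's 4-level page tables:

    /*
     * Each PML4 entry maps 2^39 bytes of virtual address space, so a
     * 512GB kernel map fits in exactly one PML4 slot.
     */
    #define PML4SHIFT       39
    #define PML4_SLOT_SPAN  (1UL << PML4SHIFT)      /* 512GB per slot */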
|
Revision tags: release/7.2.0_cvs, release/7.2.0, release/7.1.0_cvs, release/7.1.0, release/6.4.0_cvs, release/6.4.0 |
|
#
8136b726 |
| 09-Jul-2008 |
Alan Cox <alc@FreeBSD.org> |
Eliminate pmap_growkernel()'s dependence on create_pagetables() preallocating page directory pages from VM_MIN_KERNEL_ADDRESS through the end of the kernel's bss. Specifically, the dependence was in pmap_growkernel()'s one-time initialization of kernel_vm_end, not in its main body. (I could not, however, resist the urge to optimize the main body.)
Reduce the number of preallocated page directory pages to just those needed to support NKPT page table pages. (In fact, this allows me to revert a couple of my earlier changes to create_pagetables().)
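For scale (a hedged sketch; the NKPT value below is illustrative, not taken from this commit), each preallocated page table page covers 2MB of kernel VA:

    /*
     * A 4KB page table page holds 512 PTEs, each mapping a 4KB page,
     * so one page table page covers 512 * 4KB = 2MB of kernel VA.
     */
    #define NPTEPG          512
    #define NBPDR           (NPTEPG * 4096UL)       /* 2MB per PT page */
    #define NKPT            32                      /* illustrative count */
    #define KVA_PREMAPPED   (NKPT * NBPDR)          /* 64MB with these numbers */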
|
#
13e00584 |
| 05-Jul-2008 |
Alan Cox <alc@FreeBSD.org> |
Increase the kernel map's size to 7GB, making room for a kmem map of size greater than 4GB. (Auto-sizing will set the ceiling on the kmem map size to 4.2GB.)
|
#
db0a9105 |
| 03-Jul-2008 |
Alan Cox <alc@FreeBSD.org> |
Increase the ceiling on the kmem map's size to 3.6GB. Also, define the ceiling as a fraction of the kernel map's size rather than an absolute quantity. Thus, scaling of the kmem map's size will be automatic with changes to the kernel map's size.
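Assuming the fraction is three fifths (an inference that matches both this entry, 3.6GB against a 6GB map, and the 4.2GB ceiling against the 7GB map in the entry above, not a quotation from the commit), the definition would take roughly this shape:

    /*
     * Ceiling on the kmem map as a fraction of the kernel map, so it
     * scales automatically: 6GB * 3/5 = 3.6GB, and after the kernel
     * map grows to 7GB, 7GB * 3/5 = 4.2GB.
     */
    #define KERNEL_MAP_SIZE  (6UL << 30)            /* illustrative: 6GB */
    #define VM_KMEM_SIZE_MAX (KERNEL_MAP_SIZE * 3 / 5)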
|
#
17e21388 |
| 30-Jun-2008 |
Alan Cox <alc@FreeBSD.org> |
Document the layout of the address space, borrowing heavily from http://lists.freebsd.org/pipermail/freebsd-amd64/2005-July/005578.html
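The boundaries in such a layout follow from the 48-bit canonical-address rule; as a hedged sketch (these are the architectural values, not text quoted from the commit):

    /*
     * amd64 canonical addresses, 48 implemented VA bits:
     *
     * 0x0000000000000000 - 0x00007fffffffffff   user address space
     * 0x0000800000000000 - 0xffff7fffffffffff   non-canonical hole
     * 0xffff800000000000 - 0xffffffffffffffff   kernel half (recursive
     *                      page tables, direct map, kernel map)
     */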
|
#
ce3cb388 |
| 29-Jun-2008 |
Alan Cox <alc@FreeBSD.org> |
Strictly speaking, the definition of VM_MAX_KERNEL_ADDRESS is wrong. However, in practice, the error (currently) makes no difference because the computation performed by KVADDR() hides the error. This revision fixes the error.
Also, eliminate a (now) unused definition.
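A hedged sketch of the machinery involved (shift names follow common amd64 conventions and are assumptions, not quotations): KVADDR() assembles a sign-extended kernel address from per-level page-table indices, which is how an erroneous index value can end up masked by the final OR of the sign-extension bits.

    #define PAGE_SHIFT  12
    #define PDRSHIFT    21
    #define PDPSHIFT    30
    #define PML4SHIFT   39

    /* Build a kernel VA from PML4/PDP/PD/PT indices, sign-extended. */
    #define KVADDR(l4, l3, l2, l1) (                        \
            ((unsigned long)-1 << 47) |                     \
            ((unsigned long)(l4) << PML4SHIFT) |            \
            ((unsigned long)(l3) << PDPSHIFT) |             \
            ((unsigned long)(l2) << PDRSHIFT) |             \
            ((unsigned long)(l1) << PAGE_SHIFT))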
|
#
bd4328d3 |
| 23-Jun-2008 |
Alan Cox <alc@FreeBSD.org> |
Ensure that KERNBASE is no less than the virtual address -2GB.
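The worked arithmetic (the constant is the well-known amd64 value; pairing it with this exact commit is an inference): -2GB as an unsigned 64-bit address is 2^64 - 2^31.

    /*
     * (unsigned long)-2GB == 0xffffffff80000000.  Keeping KERNBASE at
     * or above this lets kernel symbols fit in sign-extended 32-bit
     * immediates, which is what gcc's -mcmodel=kernel assumes.
     */
    #define KERNBASE    0xffffffff80000000UL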
|
Revision tags: release/7.0.0_cvs, release/7.0.0, release/6.3.0_cvs, release/6.3.0 |
|
#
b8e7fc24 |
| 27-Dec-2007 |
Alan Cox <alc@FreeBSD.org> |
Add configuration knobs for the superpage reservation system. Initially, the reservation will only be enabled on amd64.
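A hedged sketch of such knobs (these are the values amd64 ended up using; treating them as this commit's exact contents is an assumption):

    #define VM_NRESERVLEVEL     1   /* one level of reservation */
    #define VM_LEVEL_0_ORDER    9   /* 2^9 = 512 pages = one 2MB superpage */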
|
#
7bfda801 |
| 25-Sep-2007 |
Alan Cox <alc@FreeBSD.org> |
Change the management of cached pages (PQ_CACHE) in two fundamental ways:
(1) Cached pages are no longer kept in the object's resident page splay tree and memq. Instead, they are kept in a separate per-object splay tree of cached pages. However, access to this new per-object splay tree is synchronized by the _free_ page queues lock, not to be confused with the heavily contended page queues lock. Consequently, a cached page can be reclaimed by vm_page_alloc(9) without acquiring the object's lock or the page queues lock.
This solves a problem independently reported by tegge@ and Isilon. Specifically, they observed the page daemon consuming a great deal of CPU time because of pages bouncing back and forth between the cache queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE). The source of this problem turned out to be a deadlock avoidance strategy employed when selecting a cached page to reclaim in vm_page_select_cache(). However, the root cause was really that reclaiming a cached page required the acquisition of an object lock while the page queues lock was already held. Thus, this change addresses the problem at its root, by eliminating the need to acquire the object's lock.
Moreover, keeping cached pages in the object's primary splay tree and memq was, in effect, optimizing for the uncommon case. Cached pages are reclaimed far, far more often than they are reactivated. Instead, this change makes reclamation cheaper, especially in terms of synchronization overhead, and reactivation more expensive, because reactivated pages will have to be reentered into the object's primary splay tree and memq.
(2) Cached pages are now stored alongside free pages in the physical memory allocator's buddy queues, increasing the likelihood that large allocations of contiguous physical memory (i.e., superpages) will succeed.
Finally, as a result of this change long-standing restrictions on when and where a cached page can be reclaimed and returned by vm_page_alloc(9) are eliminated. Specifically, calls to vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and return a formerly cached page. Consequently, a call to malloc(9) specifying M_NOWAIT is less likely to fail.
Discussed with: many over the course of the summer, including jeff@, Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)
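Conceptually (the struct and field names below are invented for illustration, not FreeBSD's), the change splits one object-lock-protected tree into two trees with different locking:

    struct vm_page;                         /* stand-in forward declaration */

    struct vm_object_sketch {
            struct vm_page  *resident_root; /* resident pages; object lock */
            struct vm_page  *cache_root;    /* cached pages; free page queues
                                               lock, so reclaiming one needs
                                               no object lock */
    };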
|
#
5b4a3e94 |
| 04-Jun-2007 |
Alan Cox <alc@FreeBSD.org> |
Add the machine-specific definitions for configuring the new physical memory allocator.
Set the size of phys_avail[] and dump_avail[] using one of these definitions.
Approved by: re
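A hedged sketch of the shape of those definitions (the segment maximum below is illustrative; the arrays hold start/end pairs plus a terminating zero pair):

    typedef unsigned long vm_paddr_t;       /* stand-in for the kernel type */

    #define VM_PHYSSEG_MAX  63              /* illustrative maximum */
    #define PHYS_AVAIL_SIZE (VM_PHYSSEG_MAX * 2 + 2)

    vm_paddr_t phys_avail[PHYS_AVAIL_SIZE];
    vm_paddr_t dump_avail[PHYS_AVAIL_SIZE];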
|
#
04a18977 |
| 05-May-2007 |
Alan Cox <alc@FreeBSD.org> |
Define every architecture as either VM_PHYSSEG_DENSE or VM_PHYSSEG_SPARSE depending on whether the physical address space is densely or sparsely populated with memory. The effect of this definition is to determine which of two implementations of vm_page_array and PHYS_TO_VM_PAGE() is used. The legacy implementation is obtained by defining VM_PHYSSEG_DENSE, and a new implementation that trades off time for space is obtained by defining VM_PHYSSEG_SPARSE. For now, all architectures except for ia64 and sparc64 define VM_PHYSSEG_DENSE. Defining VM_PHYSSEG_SPARSE on ia64 allows the entirety of my Itanium 2's memory to be used. Previously, only the first 1 GB could be used. Defining VM_PHYSSEG_SPARSE on sparc64 allows USIIIi-based systems to boot without crashing.
This change is a combination of Nathan Whitehorn's patch and my own work in perforce.
Discussed with: kmacy, marius, Nathan Whitehorn
PR: 112194
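The trade-off can be sketched as follows (helper names are assumptions about the shape of the code): DENSE pays memory for O(1) lookup across the whole physical range, while SPARSE pays a per-segment search to avoid wasting vm_page structs on holes.

    /* VM_PHYSSEG_DENSE: one flat vm_page array spans all of memory. */
    #define PHYS_TO_VM_PAGE(pa) (&vm_page_array[atop(pa) - first_page])

    /*
     * VM_PHYSSEG_SPARSE: walk the physical segment table and index into
     * that segment's private vm_page array - slower, but nothing is
     * allocated for the gaps between segments.
     */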
|
#
0e5179e4 |
| 21-Apr-2007 |
Stephane E. Potvin <sepotvin@FreeBSD.org> |
Add support for specifying a minimum size for vm.kmem_size in the loader via vm.kmem_size_min. Useful when using ZFS to make sure that vm.kmem_size will be at least 256MB (for example) without forcing a particular value via vm.kmem_size.
Approved by: njl (mentor)
Reviewed by: alc
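For example (a usage sketch; the exact value is whatever floor your workload needs), in /boot/loader.conf:

    vm.kmem_size_min="256M"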
|
Revision tags: release/6.2.0_cvs, release/6.2.0, release/5.5.0_cvs, release/5.5.0, release/6.1.0_cvs, release/6.1.0, release/6.0.0_cvs, release/6.0.0, release/5.4.0_cvs, release/5.4.0, release/4.11.0_cvs, release/4.11.0, release/5.3.0_cvs, release/5.3.0 |
|
#
3904b13f |
| 27-Oct-2004 |
Peter Wemm <peter@FreeBSD.org> |
Raise MAXDSIZ from 8G to 32G. The old limit was just an arbitrary choice that was greater than 4G. I originally used the same values as i386 in order to save opening a new PML4 page slot, but in the day of gigabytes of memory, worrying about a 4K page seems futile. Moving from 8G to 32G moves the page to a different index; it doesn't increase the number of pages used.
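The define itself is a one-liner (illustrative form below), and the "different index" remark follows from the paging arithmetic: both limits fit comfortably under the 512GB spanned by a single page-directory-pointer page, so the same one page suffices either way.

    /*
     * Illustrative: one PDP page maps 512 * 1GB = 512GB, so a data
     * segment capped at either 8GB or 32GB still needs only one such
     * page; the cap merely ends at a different PDPE index (8 vs 32).
     */
    #define MAXDSIZ (32UL * 1024 * 1024 * 1024)     /* 32GB */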
|
Revision tags: release/4.10.0_cvs, release/4.10.0, release/5.2.1_cvs, release/5.2.1, release/5.2.0_cvs, release/5.2.0 |
|
#
4d4a286c |
| 07-Dec-2003 |
Alan Cox <alc@FreeBSD.org> |
Increase VM_KMEM_SIZE_MAX from 200MB to 400MB.
Discussed with: peter
|
#
fcfe57d6 |
| 08-Nov-2003 |
Peter Wemm <peter@FreeBSD.org> |
Update the graffiti.
|
Revision tags: release/4.9.0_cvs, release/4.9.0 |
|
#
cc3112f1 |
| 25-Sep-2003 |
Peter Wemm <peter@FreeBSD.org> |
Re-raise the default datasize and stacksize now that the 32 bit exec support can clip it to sensible values.
|
#
725bc173 |
| 23-Sep-2003 |
Peter Wemm <peter@FreeBSD.org> |
Oops. back out last commit. The data and stack limits are used by the 32 bit binary stuff. 32 bit binaries do not like it much when the kernel tries hard to put things above the 8GB mark.
I have a work-in-progress to fix this properly, but I didn't want to burn anybody with this yet.
|
#
24789c54 |
| 23-Sep-2003 |
Peter Wemm <peter@FreeBSD.org> |
Increase the default data size limit from 512MB to 8GB. Increase default stack limit from 64MB to 512MB.
|
#
bf8ca114 |
| 10-Jul-2003 |
Peter Wemm <peter@FreeBSD.org> |
Fix the VADDR() macros to use either KVADDR() or UVADDR(), depending on the implied sign extension. The single unified VADDR() macro was not able to avoid sign extending the VM_MAXUSER_ADDRESS/USRSTACK values. Be explicit about UVADDR() (positive address space) and KVADDR() (kernel negative address space) to make mistakes show up more spectacularly.
Increase user VM space from 1/2TB (512GB) to 128TB.
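A hedged sketch of the split (the exact macro bodies are assumptions, not quotations from the commit): UVADDR() produces zero-extended positive addresses, while KVADDR() ORs in the high sign-extension bits, so a misuse fails spectacularly instead of silently sign-extending a large user value.

    #define PAGE_SHIFT  12
    #define PDRSHIFT    21
    #define PDPSHIFT    30
    #define PML4SHIFT   39

    /* User (positive) virtual address: plain zero-extended indices. */
    #define UVADDR(l4, l3, l2, l1) (                        \
            ((unsigned long)(l4) << PML4SHIFT) |            \
            ((unsigned long)(l3) << PDPSHIFT) |             \
            ((unsigned long)(l2) << PDRSHIFT) |             \
            ((unsigned long)(l1) << PAGE_SHIFT))

    /* Kernel (negative) virtual address: force the sign-extension bits. */
    #define KVADDR(l4, l3, l2, l1)                          \
            (((unsigned long)-1 << 47) | UVADDR(l4, l3, l2, l1))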
|
Revision tags: release/5.1.0_cvs, release/5.1.0 |
|
#
d9cd1af4 |
| 23-May-2003 |
Peter Wemm <peter@FreeBSD.org> |
Typo fix. oops.
Submitted by: jmallett
Approved by: re (blanket amd64/*)
|
#
3c9a3c9c |
| 23-May-2003 |
Peter Wemm <peter@FreeBSD.org> |
Major pmap rework to take advantage of the larger address space on amd64 systems. Of note:
- Implement a direct mapped region using 2MB pages. This eliminates the need for temporary mappings when getting ptes. This supports up to 512GB of physical memory for now. This should be enough for a while.
- Implement a 4-tier page table system. Most of the infrastructure is there for 128TB of userland virtual address space, but only 512GB is presently enabled due to a mystery bug somewhere. The design of this was heavily inspired by the alpha pmap.c.
- The kernel is moved into the negative address space(!).
- The kernel has 2GB of KVM available.
- Provide a uma memory allocator to use the direct map region to take advantage of the 2MB TLBs.
- Fixed some assumptions in the bus_space macros about the ability to fit virtual addresses in an 'int'.
Notable missing things:
- pmap_growkernel() should be able to grow to 512GB of KVM by expanding downwards below kernbase. The kernel must be at the top 2GB of the negative address space because of gcc code generation strategies.
- need to fix the >512GB user vm code.
Approved by: re (blanket)
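The direct map's payoff can be sketched like this (the base constant and macro shapes mirror later FreeBSD code; treating them as this commit's exact form is an assumption):

    /*
     * With all physical memory mapped at a fixed offset using 2MB
     * pages, physical-to-virtual conversion is pure arithmetic - no
     * temporary mapping is needed to reach a pte.
     */
    #define DMAP_MIN_ADDRESS    0xfffff80000000000UL    /* illustrative base */
    #define PHYS_TO_DMAP(pa)    ((pa) | DMAP_MIN_ADDRESS)
    #define DMAP_TO_PHYS(va)    ((va) & ~DMAP_MIN_ADDRESS)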
|
#
afa88623 |
| 01-May-2003 |
Peter Wemm <peter@FreeBSD.org> |
Commit MD parts of a loosely functional AMD64 port. This is based on a heavily stripped down FreeBSD/i386 (brutally stripped down actually) to attempt to get a stable base to start from. There is a lot missing still. Worth noting:
- The kernel runs at 1GB in order to cheat with the pmap code. pmap uses a variation of the PAE code in order to avoid having to worry about 4 levels of page tables yet.
- It boots in 64 bit "long mode" with a tiny trampoline embedded in the i386 loader. This simplifies locore.s greatly.
- There are still quite a few fragments of i386-specific code that have not been translated yet, and some that I cheated and wrote dumb C versions of (bcopy etc).
- It has both int 0x80 for syscalls (but using registers for argument passing, as is native on the amd64 ABI), and the 'syscall' instruction for syscalls. int 0x80 preserves all registers, 'syscall' does not.
- I have tried to minimize looking at the NetBSD code, except in a couple of places (eg: to find which register they use to replace the trashed %rcx register in the syscall instruction). As a result, there is not a lot of similarity. I did look at NetBSD a few times while debugging to get some ideas about what I might have done wrong in my first attempt.
|