# 7d3d462b | 13-Nov-2012 | Neel Natu <neel@FreeBSD.org>
IFC @ r242940

# 79f62ed6 | 10-Nov-2012 | Alfred Perlstein <alfred@FreeBSD.org>
Allow maxusers to scale on machines with large address space.
Some hooks are added to clamp down maxusers and nmbclusters for small address space systems.
VM_MAX_AUTOTUNE_MAXUSERS - the max maxusers that will be autotuned based on physical memory.
VM_MAX_AUTOTUNE_NMBCLUSTERS - max nmbclusters based on physical memory.
These are set to the old values on i386 to preserve the clamping that was being done to all arches.
Another macro VM_AUTOTUNE_NMBCLUSTERS is provided to allow an override for the calculation on a MD basis. Currently no arch defines this.
Reviewed by: peter
MFC after: 2 weeks
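To make the new clamp concrete, here is a minimal userland sketch. Only the VM_MAX_AUTOTUNE_MAXUSERS name comes from the message above; the one-maxuser-per-2-MB scaling and the floor of 32 are assumptions for illustration.

```c
#include <stdio.h>
#include <stdint.h>

/*
 * Hypothetical cap a small-address-space arch might set in
 * <machine/vmparam.h>; 384 mirrors the old i386-style clamp but is
 * an assumption here.
 */
#define VM_MAX_AUTOTUNE_MAXUSERS	384

/*
 * Scale maxusers with physical memory, then clamp. One maxuser per
 * 2 MB of RAM and a floor of 32 are illustrative ratios, not the
 * kernel's exact tuning rule.
 */
static int
autotune_maxusers(uint64_t physmem_bytes)
{
	int maxusers = (int)(physmem_bytes / (2ULL * 1024 * 1024));

	if (maxusers < 32)
		maxusers = 32;
#ifdef VM_MAX_AUTOTUNE_MAXUSERS
	if (maxusers > VM_MAX_AUTOTUNE_MAXUSERS)
		maxusers = VM_MAX_AUTOTUNE_MAXUSERS;
#endif
	return (maxusers);
}

int
main(void)
{
	/* 1 GB scales to 512, then the cap pulls it back to 384. */
	printf("1 GB RAM  -> maxusers %d\n", autotune_maxusers(1ULL << 30));
	printf("16 GB RAM -> maxusers %d\n", autotune_maxusers(1ULL << 34));
	return (0);
}
```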

# 23090366 | 04-Nov-2012 | Simon J. Gerraty <sjg@FreeBSD.org>
Sync from head

# de720122 | 15-Jul-2012 | Gleb Smirnoff <glebius@FreeBSD.org>
Merge head r236710 through r238467.

# 6cf87ec8 | 13-Jul-2012 | Xin LI <delphij@FreeBSD.org>
IFC @238412.

# b652778e | 11-Jul-2012 | Peter Grehan <grehan@FreeBSD.org>
IFC @ r238370

# d69ae412 | 22-Jun-2012 | Konstantin Belousov <kib@FreeBSD.org>
Enable the shared page on i386 now that it has a use for vdso_timehands.
MFC after: 1 month

Revision tags: release/8.3.0_cvs, release/8.3.0

# 8fa0b743 | 23-Jan-2012 | Xin LI <delphij@FreeBSD.org>
IFC @230489 (pending review).

# 80dbff4e | 04-Jan-2012 | Sean Bruno <sbruno@FreeBSD.org>
IFC to head to catch up the bhyve branch
Approved by: grehan@

Revision tags: release/9.0.0

# c5ecbfb4 | 10-Dec-2011 | Alan Cox <alc@FreeBSD.org>
Avoid the possibility of integer overflow in the calculation of VM_KMEM_SIZE_MAX. Specifically, if the user/kernel address space split was changed such that the kernel address space was greater than or equal to 2 GB, then overflow would occur.
PR: 161721
MFC after: 3 weeks
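The failure mode is plain 32-bit unsigned wraparound: once the kernel address space reaches 2 GB, multiplying before dividing overflows. A standalone demonstration, with the 2/5 ratio assumed for illustration rather than quoted from the header:

```c
#include <stdio.h>
#include <stdint.h>

int
main(void)
{
	/* A 32-bit kernel address space of exactly 2 GB. */
	uint32_t kva_size = 0x80000000u;

	/*
	 * Multiplying first wraps: 2 GB * 2 is 4 GB, which is 0 modulo
	 * 2^32, so the "2/5 of KVA" result collapses to 0.
	 */
	uint32_t broken = kva_size * 2 / 5;

	/* Dividing first keeps every intermediate value in range. */
	uint32_t fixed = kva_size / 5 * 2;

	printf("multiply-first: %u\n", (unsigned)broken);	/* 0 */
	printf("divide-first:   %u\n", (unsigned)fixed);	/* 858993458 */
	return (0);
}
```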

# b2aa562e | 13-May-2011 | Attilio Rao <attilio@FreeBSD.org>
MFC

# cfb00e5a | 13-May-2011 | Matthew D Fleming <mdf@FreeBSD.org>
Move the ZERO_REGION_SIZE to a machine-dependent file, as on many architectures (i386, for example) the virtual memory space may be constrained enough that 2MB is a large chunk. Use 64K for arches other than amd64 and ia64, with special handling for sparc64 due to differing hardware.
Also commit the comment changes to kmem_init_zero_region() that I missed due to not saving the file. (Darn the unfamiliar development environment).
Arch maintainers, please feel free to adjust ZERO_REGION_SIZE as you see fit.
Requested by: alc
MFC after: 1 week
MFC with: r221853
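A sketch of what the per-arch default looks like; the guard shape here is assumed, while the 2 MB and 64 K sizes come from the message above:

```c
#include <stdio.h>

/*
 * Sketch of the per-arch sizing described above: 2 MB where kernel
 * virtual address space is plentiful, 64 K elsewhere. The real values
 * live in each <machine/vmparam.h> and may be tuned by maintainers.
 */
#if defined(__amd64__) || defined(__ia64__)
#define ZERO_REGION_SIZE	(2 * 1024 * 1024)
#else
#define ZERO_REGION_SIZE	(64 * 1024)
#endif

int
main(void)
{
	printf("ZERO_REGION_SIZE = %d bytes\n", ZERO_REGION_SIZE);
	return (0);
}
```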

# e9a3f785 | 23-Mar-2011 | Alan Cox <alc@FreeBSD.org>
Modestly increase the maximum allowed size of the kmem map on i386. Also, express this new maximum as a fraction of the kernel's address space size rather than a constant so that increasing KVA_PAGES will automatically increase this maximum. As a side-effect of this change, kern.maxvnodes will automatically increase by a proportional amount.
While I'm here ensure that this change doesn't result in an unintended increase in maxpipekva on i386. Calculate maxpipekva based upon the size of the kernel address space and the amount of physical memory instead of the size of the kmem map. The memory backing pipes is not allocated from the kmem map. It is allocated from its own submap of the kernel map. In short, it has no real connection to the kmem map. (In fact, the commit messages for the maxpipekva auto-sizing talk about using the kernel map size, cf. r117325 and r117391, even though the implementation actually used the kmem map size.) Although the calculation is now done differently, the resulting value for maxpipekva should remain almost the same on i386. However, on amd64, the value will be reduced by 2/3. This is intentional. The recent change to VM_KMEM_SIZE_SCALE on amd64 for the benefit of ZFS also had the unnecessary side-effect of increasing maxpipekva. This change is effectively restoring maxpipekva on amd64 to its prior value.
Eliminate init_param3() since it is no longer used.
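A hedged sketch of the maxpipekva recalculation: it scales with physical memory but is capped by the kernel address space size instead of the kmem map. The function name and the 1/64 ratios are assumptions for illustration, not FreeBSD's exact tuning.

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative constants; the real ones come from <machine/vmparam.h>. */
#define PAGE_SIZE	4096
#define KVA_SIZE	(1ULL << 30)	/* assume a 1 GB kernel address space */

/*
 * Scale maxpipekva with physical memory, then cap it at a fraction of
 * the kernel address space rather than the kmem map size.
 */
static uint64_t
tune_maxpipekva(uint64_t physpages)
{
	uint64_t maxpipekva = physpages / 64 * PAGE_SIZE;

	if (maxpipekva > KVA_SIZE / 64)
		maxpipekva = KVA_SIZE / 64;
	return (maxpipekva);
}

int
main(void)
{
	/* 4 GB of RAM is 1M pages; here the KVA-based cap wins. */
	printf("maxpipekva = %llu bytes\n",
	    (unsigned long long)tune_maxpipekva(1024 * 1024));
	return (0);
}
```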

# 9b4fcf85 | 18-Feb-2011 | Marcel Moolenaar <marcel@FreeBSD.org>
Merge svn+ssh://svn.freebsd.org/base/head@218816

Revision tags: release/7.4.0_cvs, release/8.2.0_cvs, release/7.4.0, release/8.2.0

# 50a57dfb | 09-Jan-2011 | Konstantin Belousov <kib@FreeBSD.org>
Move repeated MAXSLP definition from machine/vmparam.h to sys/vmmeter.h. Update the outdated comments describing MAXSLP and the process selection algorithm for swap out.
Comments wording and reviewed by: alc

# b17f9ad2 | 16-Aug-2010 | Marcel Moolenaar <marcel@FreeBSD.org>
Merge svn+ssh://svn.freebsd.org/base/head@211344

# a3870a18 | 27-Jul-2010 | John Baldwin <jhb@FreeBSD.org>
Very rough first cut at NUMA support for the physical page allocator. For now it uses a very dumb first-touch allocation policy. This will change in the future.
- Each architecture indicates the maximum number of supported memory domains via a new VM_NDOMAIN parameter in <machine/vmparam.h>.
- Each cpu now has a PCPU_GET(domain) member to indicate the memory domain a CPU belongs to. Domain values are dense and numbered from 0.
- When a platform supports multiple domains, the default freelist (VM_FREELIST_DEFAULT) is split up into N freelists, one for each domain. The MD code is required to populate an array of mem_affinity structures. Each entry in the array defines a range of memory (start and end) and a domain for the range. Multiple entries may be present for a single domain. The list is terminated by an entry where all fields are zero. This array of structures is used to split up phys_avail[] regions that fall in VM_FREELIST_DEFAULT into per-domain freelists (see the sketch after this message).
- Each memory domain has a separate lookup-array of freelists that is used when fulfilling a physical memory allocation. Right now the per-domain freelists are listed in a round-robin order for each domain. In the future a table such as the ACPI SLIT table may be used to order the per-domain lookup lists based on the penalty for each memory domain relative to a specific domain. The lookup lists may be examined via a new vm.phys.lookup_lists sysctl.
- The first-touch policy is implemented by using PCPU_GET(domain) to pick a lookup list when allocating memory.
Reviewed by: alc
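The zero-terminated mem_affinity array described above can be pictured as follows; the struct fields follow the commit's description, while the addresses and table name are made up:

```c
#include <stdio.h>
#include <stdint.h>

typedef uint64_t vm_paddr_t;

/*
 * Zero-terminated affinity table: each entry maps a physical address
 * range to a memory domain, and an all-zero entry ends the list.
 */
struct mem_affinity {
	vm_paddr_t start;
	vm_paddr_t end;
	int domain;
};

static struct mem_affinity affinity_table[] = {
	{ 0x000000000, 0x07fffffff, 0 },	/* first 2 GB -> domain 0 */
	{ 0x080000000, 0x0ffffffff, 1 },	/* next 2 GB  -> domain 1 */
	{ 0, 0, 0 },				/* all-zero terminator */
};

int
main(void)
{
	for (struct mem_affinity *m = affinity_table; m->end != 0; m++)
		printf("%#jx-%#jx -> domain %d\n",
		    (uintmax_t)m->start, (uintmax_t)m->end, m->domain);
	return (0);
}
```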

Revision tags: release/8.1.0_cvs, release/8.1.0, release/7.3.0_cvs, release/7.3.0, release/8.0.0_cvs, release/8.0.0

# 10b3b545 | 17-Sep-2009 | Dag-Erling Smørgrav <des@FreeBSD.org>
Merge from head

Revision tags: release/7.2.0_cvs, release/7.2.0

# 9c797940 | 13-Apr-2009 | Oleksandr Tymoshenko <gonzo@FreeBSD.org>
- Merge from HEAD

# beb3c3a9 | 05-Apr-2009 | Alan Cox <alc@FreeBSD.org>
Retire VM_PROT_READ_IS_EXEC. It was intended to be a micro-optimization, but I see no benefit from it today.
VM_PROT_READ_IS_EXEC was only intended for use on processors that do not distinguish between read and execute permission. On an mmap(2) or mprotect(2), it automatically added execute permission if the caller specified permissions included read permission. The hope was that this would reduce the number of vm map entries needed to implement an address space because there would be fewer neighboring vm map entries that differed only in the presence or absence of VM_PROT_EXECUTE. (See vm/vm_mmap.c revision 1.56.)
Today, I don't see any real applications that benefit from VM_PROT_READ_IS_EXEC. In any case, vm map entries are now organized as a self-adjusting binary search tree instead of an ordered list. So, the need for coalescing vm map entries is not as great as it once was.
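A small sketch of the retired behavior; the flag values match FreeBSD's vm_prot_t, and defining VM_PROT_READ_IS_EXEC here just simulates an affected architecture:

```c
#include <stdio.h>

/* vm_prot_t flag values, as in FreeBSD's <vm/vm.h>. */
#define VM_PROT_READ		0x01
#define VM_PROT_WRITE		0x02
#define VM_PROT_EXECUTE		0x04

/* Pretend this build targets an arch that defined the retired option. */
#define VM_PROT_READ_IS_EXEC

/*
 * Sketch of the retired promotion: any mmap(2)/mprotect(2) request
 * that includes read permission is silently widened to include
 * execute, so neighboring map entries are more likely to coalesce.
 */
static int
widen_prot(int prot)
{
#ifdef VM_PROT_READ_IS_EXEC
	if (prot & VM_PROT_READ)
		prot |= VM_PROT_EXECUTE;
#endif
	return (prot);
}

int
main(void)
{
	/* 0x1 (read) becomes 0x5 (read + execute). */
	printf("read-only request widens to %#x\n", widen_prot(VM_PROT_READ));
	return (0);
}
```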

Revision tags: release/7.1.0_cvs, release/7.1.0, release/6.4.0_cvs, release/6.4.0

# 93ee134a | 15-Aug-2008 | Kip Macy <kmacy@FreeBSD.org>
Integrate support for xen into i386 common code.
MFC after: 1 month

# fdcd29b5 | 26-Mar-2008 | Alan Cox <alc@FreeBSD.org>
Enable the automatic creation of superpage reservations.

Revision tags: release/7.0.0_cvs, release/7.0.0, release/6.3.0_cvs, release/6.3.0

# b8e7fc24 | 27-Dec-2007 | Alan Cox <alc@FreeBSD.org>
Add configuration knobs for the superpage reservation system. Initially, the reservation will only be enabled on amd64.
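A sketch of what such knobs plausibly look like in a <machine/vmparam.h>; the names VM_NRESERVLEVEL and VM_LEVEL_0_ORDER and their values are assumptions drawn from FreeBSD's reservation code, not quoted from this commit:

```c
#include <stdio.h>

/*
 * One reservation level of 2^9 pages (a 2 MB superpage) on amd64,
 * disabled everywhere else initially. Values are illustrative.
 */
#if defined(__amd64__)
#define VM_NRESERVLEVEL		1	/* one level of reservations */
#define VM_LEVEL_0_ORDER	9	/* 512 x 4 KB pages = 2 MB */
#else
#define VM_NRESERVLEVEL		0	/* reservations disabled */
#endif

int
main(void)
{
	printf("VM_NRESERVLEVEL = %d\n", VM_NRESERVLEVEL);
	return (0);
}
```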

# 7bfda801 | 25-Sep-2007 | Alan Cox <alc@FreeBSD.org>
Change the management of cached pages (PQ_CACHE) in two fundamental ways:
(1) Cached pages are no longer kept in the object's resident page splay tree and memq. Instead, they are kept in a separate per-object splay tree of cached pages. However, access to this new per-object splay tree is synchronized by the _free_ page queues lock, not to be confused with the heavily contended page queues lock. Consequently, a cached page can be reclaimed by vm_page_alloc(9) without acquiring the object's lock or the page queues lock.
This solves a problem independently reported by tegge@ and Isilon. Specifically, they observed the page daemon consuming a great deal of CPU time because of pages bouncing back and forth between the cache queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE). The source of this problem turned out to be a deadlock avoidance strategy employed when selecting a cached page to reclaim in vm_page_select_cache(). However, the root cause was really that reclaiming a cached page required the acquisition of an object lock while the page queues lock was already held. Thus, this change addresses the problem at its root, by eliminating the need to acquire the object's lock.
Moreover, keeping cached pages in the object's primary splay tree and memq was, in effect, optimizing for the uncommon case. Cached pages are reclaimed far, far more often than they are reactivated. Instead, this change makes reclamation cheaper, especially in terms of synchronization overhead, and reactivation more expensive, because reactivated pages will have to be reentered into the object's primary splay tree and memq.
(2) Cached pages are now stored alongside free pages in the physical memory allocator's buddy queues, increasing the likelihood that large allocations of contiguous physical memory (i.e., superpages) will succeed.
Finally, as a result of this change long-standing restrictions on when and where a cached page can be reclaimed and returned by vm_page_alloc(9) are eliminated. Specifically, calls to vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and return a formerly cached page. Consequently, a call to malloc(9) specifying M_NOWAIT is less likely to fail.
Discussed with: many over the course of the summer, including jeff@, Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)

# e5c45405 | 05-Jun-2007 | Alan Cox <alc@FreeBSD.org>
Add the machine-specific definitions for configuring the new physical memory allocator.
Set the size of phys_avail[] and dump_avail[] using one of these definitions.
Approved by: re
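One plausible shape for that sizing, sketched in standalone C; the name VM_PHYSSEG_MAX, its value, and the start/end-pair layout with a zero terminator are assumptions for illustration:

```c
#include <stdio.h>
#include <stdint.h>

typedef uint64_t vm_paddr_t;

/*
 * Size the tables from a machine-dependent segment count: each
 * segment contributes a start/end pair, plus one zero pair to
 * terminate the list.
 */
#define VM_PHYSSEG_MAX	17

static vm_paddr_t phys_avail[VM_PHYSSEG_MAX * 2 + 2];
static vm_paddr_t dump_avail[VM_PHYSSEG_MAX * 2 + 2];

int
main(void)
{
	printf("phys_avail slots: %zu\n",
	    sizeof(phys_avail) / sizeof(phys_avail[0]));
	printf("dump_avail slots: %zu\n",
	    sizeof(dump_avail) / sizeof(dump_avail[0]));
	return (0);
}
```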