13 .\" may be used to endorse or promote products derived from this software
17 .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
21 .\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
68 pp. 295-303, June 1988.
94 The 4.3BSD UNIX kernel uses many memory allocation mechanisms,
99 patterns in the UNIX kernel and a hybrid strategy that is time-efficient
100 for small allocations and space-efficient for large allocations.
102 with a single easy-to-program interface,
106 relative to the current implementations.
117 and others include information to describe I/O operations.
120 In a user process such short-term
121 memory would be allocated on the run-time stack.
122 Because the kernel has a limited run-time stack,
123 it is not feasible to allocate even moderately sized blocks of memory on it.
127 it must allocate a one kilobyte buffer to hold the name.
129 and really have to be allocated from dynamic memory.
136 a specialized memory allocation scheme has been written to handle it.
142 finding the oldest buffer, pushing its contents to disk if they are dirty,
143 and moving physical memory into or out of the buffer to create
153 to buffer disk blocks to other purposes.
157 A generalized memory allocator is needed to reduce the complexity
159 Rather than providing many semi-specialized ways of allocating memory,
162 programmers do not need to figure
163 out the most appropriate way to allocate memory.
168 To ease the task of understanding how to use it,
169 the memory allocator should have an interface similar to the interface
170 of the well-known memory allocator provided for
179 The free routine should take a pointer to the storage being freed,
184 The design specification for a kernel memory allocator is similar to,
185 but not identical to,
189 Good use of memory is measured by the amount of memory needed to hold
208 and a need to have a ready supply of free memory for future requests.
224 it is desirable to release unused memory in the ``required'' pool
225 rather than to hold it as is typically done with user processes.
228 a user process must do a system call to release memory.
235 because the kernel must allocate many data structures that user
236 processes can allocate cheaply on their run-time stack.
243 of frequently-used kernel interfaces will feel that they
244 cannot afford to use it as their primary memory allocator.
248 The kernel ends up with many different free lists of memory
261 .H 1 "Existing User-level Implementations"
263 There are many different algorithms and
264 implementations of user-level memory allocators.
275 The 4.2BSD user-level memory allocator works by maintaining a set of lists
279 the size of the request is rounded up to the next power of two.
281 to the specified power of two and returned to the requester.
283 a block from the 64-sized list.
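.P
As a minimal sketch of this rounding step (not the 4.2BSD source itself;
the function name and the eight-byte smallest list are assumptions),
a candidate size is doubled until it covers the request,
and the number of doublings selects the list:
.DS
/* Sketch of power-of-two rounding; illustrative only. */
#include <stdio.h>

static unsigned long
roundup_pow2(unsigned long request, int *listindx)
{
	unsigned long size = 8;	/* assumed smallest list */
	int indx = 0;

	while (size < request) {
		size <<= 1;
		indx++;
	}
	*listindx = indx;
	return (size);
}

int
main(void)
{
	int indx;
	unsigned long size = roundup_pow2(53, &indx);

	/* A 53-byte request is serviced from the 64-sized list. */
	printf("request 53 -> %lu-sized list (index %d)\n", size, indx);
	return (0);
}
.DE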
291 .H 1 "Considerations Unique to a Kernel Allocator"
294 memory allocator for the kernel that do not apply to a user process
300 memory dedicated to the operating system is uninteresting to use.
303 These data structures never need to be
304 expanded to accommodate memory requests;
308 Although it could allocate static data structures to manage
311 The other alternative is to allocate data structures as they are needed.
314 structures and additional mechanisms to link them all together.
318 Unlike user processes that can only grow and shrink their heap at one end,
324 allocated to its address space.
326 kernel exactly corresponds to the set of pages that it is really using.
327 .FI "One day memory usage on a Berkeley time-sharing machine"
331 A final special condition that applies to the kernel is that
333 Each one of these uses of dynamic memory can be assigned a type.
349 \*(Lb shows the memory usage of the kernel over a one day period
358 and are typically for long-lived objects
359 such as buffers to hold the superblock for
361 Thus, a memory allocator only needs to be
369 and a slower but more memory-efficient first-fit allocator.
371 Small allocations are done using the 4.2BSD power-of-two list strategy;
373 the list to use and the removal of an element if it is available,
375 Macros are provided to avoid the cost of a subroutine call.
377 made to the allocator itself.
379 the lists corresponding to large allocations are always empty.
383 The macro computes the list on which to place the request
386 considered to be a large allocation.
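.P
A sketch of how such a macro might look follows;
the identifiers here (freelist, kmem_slow, LISTINDX, KMEM_MALLOC, MAXSMALL)
and the sixteen-byte minimum list are illustrative assumptions,
not the 4.3BSD names:
.DS
.eo
/* Illustrative small-allocation fast path. */
#include <stddef.h>

#define PAGESIZE	1024		/* 4.3BSD/VAX software page size */
#define MAXSMALL	(2 * PAGESIZE)	/* cutoff for the list strategy */
#define NLISTS		16

struct freehdr {
	struct freehdr *next;		/* next free element of this size */
};
extern struct freehdr *freelist[NLISTS];	/* one list per power of two */
extern void *kmem_slow(unsigned long size);	/* refill lists, first fit */

/* Which power-of-two list covers a request (16-byte minimum assumed). */
#define LISTINDX(size, indx) do { \
	unsigned long sz_ = 16; \
	for ((indx) = 0; sz_ < (size); (indx)++) \
		sz_ <<= 1; \
} while (0)

/*
 * Fast path: when the request is small and its list is non-empty,
 * pop the head of the list in line, with no subroutine call;
 * otherwise fall into the allocator proper.
 */
#define KMEM_MALLOC(addr, size) do { \
	int indx_; \
	LISTINDX(size, indx_); \
	if ((size) <= MAXSMALL && freelist[indx_] != NULL) { \
		(addr) = (void *)freelist[indx_]; \
		freelist[indx_] = freelist[indx_]->next; \
	} else \
		(addr) = kmem_slow(size); \
} while (0)

void *
example(unsigned long n)
{
	void *p;

	KMEM_MALLOC(p, n);
	return (p);
}
.ec
.DE
When the requested size is a compile-time constant,
a compiler can fold the index computation away,
which is what makes the in-line macro form cheap.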
391 Because of the inefficiency of power-of-two allocation strategies
396 that showed that 95 to 98% of allocations are of size one kilobyte or less.
399 always requests a one kilobyte block.
405 \(dgTo understand why this number is $size 8 {2~times~pagesize}$ one
406 observes that the power-of-two algorithm yields sizes of 1, 2, 4, 8, \&...
409 Thus for allocations of sizes between one and two pages
412 that a difference emerges where the power-of-two algorithm will use
415 In 4.3BSD on the VAX, the (software) page size is one kilobyte,
418 Large allocations are first rounded up to be a multiple of the page size.
419 The allocator then uses a first-fit algorithm to find space in the
423 the power-of-two allocation strategy.
425 the memory pages are returned to the free memory pool,
426 and the address space is returned to the kernel address arena
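.P
A first-fit pass over a resource map of free address and length pairs
might look like the following sketch;
the map layout and the names kmemmap and arena_alloc are assumptions:
.DS
/* Hypothetical first fit over the kernel address arena. */
#define PAGESIZE	1024
#define NMAPENT		32

struct mapent {
	unsigned long addr;	/* start of a free region */
	unsigned long size;	/* its length in bytes */
};
static struct mapent kmemmap[NMAPENT];	/* assumed arena free map */

/* Round the request to a page multiple, take the first region that fits. */
unsigned long
arena_alloc(unsigned long size)
{
	int i;

	size = (size + PAGESIZE - 1) & ~(unsigned long)(PAGESIZE - 1);
	for (i = 0; i < NMAPENT; i++) {
		if (kmemmap[i].size >= size) {
			unsigned long addr = kmemmap[i].addr;

			kmemmap[i].addr += size;
			kmemmap[i].size -= size;
			return (addr);
		}
	}
	return (0);	/* arena exhausted */
}
.DE
Freeing a large block reverses the process:
the pages go back to the free memory pool
and the address range is coalesced back into the map.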
429 Another technique to improve both the efficiency of memory utilization
431 is to cluster same-sized small allocations on a page.
432 When a list for a power-of-two allocation is empty,
442 The 4.2BSD user-level allocator stores the size of each block
445 require a power-of-two-sized block.
455 The reason is that many allocations in the kernel are for blocks of
457 These requests would be nearly doubled if the user-level strategy were used.
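.P
The two ideas combine as in the sketch below:
carve a fresh page into equal elements when a list runs empty,
and record the list index once per page
so that free later needs only a pointer.
All names here (kmem_refill, kmem_free, kmemsize, get_free_page)
and the sixteen-byte minimum element are assumptions:
.DS
/* Hypothetical page carving with per-page size bookkeeping. */
#define PAGESIZE	1024
#define NLISTS		16
#define NPAGES		1024
#define PAGENO(a)	(((unsigned long)(a) - kmembase) / PAGESIZE)

struct freehdr {
	struct freehdr *next;
};
static struct freehdr *freelist[NLISTS];
static char kmemsize[NPAGES];		/* list index, kept per page */
extern unsigned long kmembase;		/* assumed arena base address */
extern char *get_free_page(void);	/* assumed page-allocator hook */

/* Refill an empty list by chopping a page into (16 << indx)-byte pieces. */
void
kmem_refill(int indx)
{
	unsigned long elemsize = 16UL << indx;
	char *page = get_free_page();
	char *p;

	kmemsize[PAGENO(page)] = indx;
	for (p = page; p + elemsize <= page + PAGESIZE; p += elemsize) {
		((struct freehdr *)p)->next = freelist[indx];
		freelist[indx] = (struct freehdr *)p;
	}
}

/* Because the size lives in kmemsize, free takes only the pointer. */
void
kmem_free(void *addr)
{
	int indx = kmemsize[PAGENO(addr)];

	((struct freehdr *)addr)->next = freelist[indx];
	freelist[indx] = (struct freehdr *)addr;
}
.DE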
461 which is willing to wait for memory to become available,
463 that cannot wait for memory to become available.
464 Clients indicate their willingness (and ability) to wait with a flag
466 For clients that are willing to wait,
471 These clients must be prepared to cope with this
473 (usually by giving up and hoping to do better later).
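.P
In use, the flag separates the two classes of clients.
A sketch follows; the routine name kmem_alloc is hypothetical,
though M_WAITOK and M_NOWAIT match the historical BSD flag names:
.DS
/* Hedged sketch of the waiting and non-waiting interface. */
#include <stddef.h>

#define M_WAITOK	0x0000	/* caller may sleep until memory appears */
#define M_NOWAIT	0x0001	/* interrupt level: must not sleep */

extern void *kmem_alloc(size_t size, int flags);	/* hypothetical */

void
example(void)
{
	void *buf;

	/* Top-level code can wait for memory to become available. */
	buf = kmem_alloc(1024, M_WAITOK);

	/* Interrupt-level code must cope with a null return. */
	buf = kmem_alloc(512, M_NOWAIT);
	if (buf == NULL)
		return;		/* give up and hope to do better later */
}
.DE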
477 Conversion from the old memory allocators to the new allocator
479 Many of the special purpose allocators have been eliminated.
485 Many of the special purpose memory allocators built on
496 Consequently, applications that used to allocate network
497 buffers for their own uses have been switched over to using
501 it is hard to measure the amount of time spent allocating
503 The usual approach is to compile a kernel for profiling
505 implemented the old abstraction versus those that implement the new one.
506 The old routines are difficult to quantify because
507 individual routines were used for more than one purpose.
510 routine was used both to allocate one kilobyte memory blocks
511 and for its intended purpose of providing buffers to the filesystem.
518 of the running time of the multi-purpose routines that could
521 the time spent in the kernel could be accounted to memory allocation.
523 The new allocator is difficult to measure
528 we changed the macro always to call the memory allocation routine.
537 significant new run-time costs.
540 on a per-page basis.
541 This technique allows the most frequently requested sizes to be
544 with the allocator to four kilobytes of information
545 per megabyte of memory under management (with a one kilobyte page size).
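.P
As a quick check of that figure
(a sketch of the arithmetic, assuming the stated one kilobyte page size):
a megabyte contains
$1024$
pages, so four kilobytes of bookkeeping amounts to
$4096 / 1024 ~=~ 4$
bytes for each page under management.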
548 Our next project is to convert many of the static
549 kernel tables to be dynamically allocated.
557 (although a limit will be retained to constrain runaway clients).
562 memory is never moved from one size list to another.
570 However, pages allocated to small requests are allocated once
577 However, we have been investigating ways to handle such problems
582 Since any given page has only one size of elements allocated from it,
583 the effect of the sorting would be to sort the list into distinct pages.
585 the page itself could be released back to the free pool so that
586 it could be allocated to another purpose.
589 most allocations are short-lived, lasting only for the duration of
591 As new allocations would be made from the page sorted to
594 allow pages later in the list to be freed.
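.P
Under the same illustrative assumptions as the earlier sketches
(an address-sorted free list and per-page bookkeeping;
none of the names are from 4.3BSD),
the reclamation pass might read:
.DS
/* Hypothetical page reclamation from an address-sorted free list. */
#include <stddef.h>

#define PAGESIZE	1024
#define NLISTS		16
#define PAGENO(a)	(((unsigned long)(a) - kmembase) / PAGESIZE)

struct freehdr {
	struct freehdr *next;
};
extern struct freehdr *freelist[NLISTS];
extern unsigned long kmembase;
extern void put_free_page(void *page);	/* assumed release hook */

/*
 * With the list sorted by address, the free elements of a wholly
 * free page are consecutive, so one unlink releases the page.
 */
void
kmem_reclaim(int indx)
{
	unsigned long elemsize = 16UL << indx;	/* assumed 16-byte minimum */
	unsigned long perpage = PAGESIZE / elemsize;
	struct freehdr **prevp = &freelist[indx];
	struct freehdr *p;

	while ((p = *prevp) != NULL) {
		unsigned long page = PAGENO(p);
		unsigned long run = 1;
		struct freehdr *last = p;

		/* Count the run of free elements on this page. */
		while (last->next != NULL && PAGENO(last->next) == page) {
			last = last->next;
			run++;
		}
		if (run == perpage) {
			*prevp = last->next;	/* unlink the whole page */
			put_free_page((void *)p);
		} else
			prevp = &last->next;
	}
}
.DE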
599 That part of the system is expected to undergo major revision within
600 the next year or so, and it will probably be changed to use
609 it manages the constant-sized buffer pool.
610 We plan to merge the filesystem buffer cache into the virtual memory system's
612 This change will allow the size of the buffer pool to be changed
613 according to memory load,
619 we have made various versions of our allocator available to our test sites.
628 David Korn, Kiem-Phong Vo,
631 pp. 489-506, June 1985.
636 pp. 519-531, June 1985.
646 pp. 1931-1946, 1978.