
Searched +full:memory +full:- +full:to +full:- +full:memory (Results 1 – 25 of 1096) sorted by relevance


/linux/Documentation/admin-guide/mm/
memory-hotplug.rst
2 Memory Hot(Un)Plug
5 This document describes generic Linux support for memory hot(un)plug with
13 Memory hot(un)plug allows for increasing and decreasing the size of physical
14 memory available to a machine at runtime. In the simplest case, it consists of
18 Memory hot(un)plug is used for various purposes:
20 - The physical memory available to a machine can be adjusted at runtime, up- or
21 downgrading the memory capacity. This dynamic memory resizing, sometimes
22 referred to as "capacity on demand", is frequently used with virtual machines
25 - Replacing hardware, such as DIMMs or whole NUMA nodes, without downtime. One
26 example is replacing failing memory modules.
[all …]
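
The full memory-hotplug.rst goes on to describe onlining and offlining blocks through sysfs. A minimal userspace sketch of that interface, assuming a hypothetical block number 42 (real block numbers vary by system)::

    /* Online a memory block by writing to its sysfs "state" file. */
    #include <stdio.h>

    int main(void)
    {
            FILE *f = fopen("/sys/devices/system/memory/memory42/state", "w");

            if (!f) {
                    perror("fopen");
                    return 1;
            }
            fputs("online", f);     /* or "offline" to unplug */
            fclose(f);
            return 0;
    }
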
concepts.rst
5 The memory management in Linux is a complex system that evolved over the
6 years and included more and more functionality to support a variety of
7 systems from MMU-less microcontrollers to supercomputers. The memory
12 address to a physical address.
16 Virtual Memory Primer
19 The physical memory in a computer system is a limited resource and
20 even for systems that support memory hotplug there is a hard limit on
21 the amount of memory that can be installed. The physical memory is not
27 All this makes dealing directly with physical memory quite complex and
28 to avoid this complexity a concept of virtual memory was developed.
[all …]
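
Because the primer centers on translating virtual addresses to physical ones, a userspace sketch may help: /proc/self/pagemap exposes the translation, one 64-bit entry per virtual page (bit 63 = present, bits 0-54 = page frame number; reading PFNs requires root)::

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
            long psize = sysconf(_SC_PAGESIZE);
            char *buf = malloc(psize);
            uint64_t entry;
            FILE *f = fopen("/proc/self/pagemap", "rb");

            if (!f || !buf)
                    return 1;
            buf[0] = 1;     /* touch the page so it is actually mapped */
            fseek(f, (uintptr_t)buf / psize * sizeof(entry), SEEK_SET);
            if (fread(&entry, sizeof(entry), 1, f) == 1 && (entry >> 63))
                    printf("virtual %p -> PFN 0x%llx\n", (void *)buf,
                           (unsigned long long)(entry & ((1ULL << 55) - 1)));
            fclose(f);
            return 0;
    }
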
numaperf.rst
2 NUMA Memory Performance
8 Some platforms may have multiple types of memory attached to a compute
9 node. These disparate memory ranges may share some characteristics, such
13 A system supports such heterogeneous memory by grouping each memory type
15 characteristics. Some memory may share the same node as a CPU, and others
16 are provided as memory only nodes. While memory only nodes do not provide
17 CPUs, they may still be local to one or more compute nodes relative to
19 nodes with local memory and a memory only node for each compute node::
21 +------------------+ +------------------+
22 | Compute Node 0 +-----+ Compute Node 1 |
[all …]
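
The full numaperf.rst documents per-node performance attributes under sysfs. A sketch that reads the advertised read bandwidth of memory node 1 as seen from its best-performing initiators (the node number is illustrative)::

    #include <stdio.h>

    int main(void)
    {
            char buf[64];
            FILE *f = fopen("/sys/devices/system/node/node1/access0/"
                            "initiators/read_bandwidth", "r");

            if (f && fgets(buf, sizeof(buf), f))
                    printf("node1 read bandwidth (MiB/s): %s", buf);
            if (f)
                    fclose(f);
            return 0;
    }
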
numa_memory_policy.rst
2 NUMA Memory Policy
5 What is NUMA Memory Policy?
8 In the Linux kernel, "memory policy" determines from which node the kernel will
9 allocate memory in a NUMA system or in an emulated NUMA system. Linux has
10 supported platforms with Non-Uniform Memory Access architectures since 2.4.?.
11 The current memory policy support was added to Linux 2.6 around May 2004. This
12 document attempts to describe the concepts and APIs of the 2.6 memory policy
15 Memory policies should not be confused with cpusets
16 (``Documentation/admin-guide/cgroup-v1/cpusets.rst``)
18 memory may be allocated by a set of processes. Memory policies are a
[all …]
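
Among the APIs the document describes is set_mempolicy(2). A minimal sketch that binds the calling task's future allocations to node 0, assuming libnuma's <numaif.h> wrapper (link with -lnuma)::

    #include <numaif.h>
    #include <stdio.h>

    int main(void)
    {
            unsigned long nodemask = 1UL;   /* bit 0 set => node 0 */

            if (set_mempolicy(MPOL_BIND, &nodemask, sizeof(nodemask) * 8))
                    perror("set_mempolicy");
            /* Allocations made by this task now come from node 0. */
            return 0;
    }
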
/linux/Documentation/admin-guide/cgroup-v1/
memory.rst
2 Memory Resource Controller
8 here but make sure to check the current code if you need a deeper
12 The Memory Resource Controller has generically been referred to as the
13 memory controller in this document. Do not confuse the memory controller
14 used here with the memory controller that is used in hardware.
17 When we mention a cgroup (a cgroupfs directory) with the memory controller,
18 we call it "memory cgroup". When you look at git-log and source code, you'll
19 see that patch titles and function names tend to use "memcg".
22 Benefits and Purpose of the memory controller
25 The memory controller isolates the memory behaviour of a group of tasks
[all …]
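
As an illustration of the v1 interface this document covers, the sketch below creates a memory cgroup and caps it at 64 MiB. It assumes the v1 hierarchy is mounted at /sys/fs/cgroup/memory; the group name "demo" is a placeholder::

    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
            FILE *f;

            mkdir("/sys/fs/cgroup/memory/demo", 0755);
            f = fopen("/sys/fs/cgroup/memory/demo/memory.limit_in_bytes", "w");
            if (!f) {
                    perror("fopen");
                    return 1;
            }
            fputs("67108864", f);   /* 64 MiB */
            fclose(f);
            return 0;
    }
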
/linux/Documentation/mm/
memory-model.rst
1 .. SPDX-License-Identifier: GPL-2.0
4 Physical Memory Model
7 Physical memory in a system may be addressed in different ways. The
8 simplest case is when the physical memory starts at address 0 and
9 spans a contiguous range up to the maximal address. It could be,
13 different memory banks are attached to different CPUs.
15 Linux abstracts this diversity using one of the two memory models:
17 memory models it supports, what the default memory model is and
18 whether it is possible to manually override that default.
20 All the memory models track the status of physical page frames using
[all …]
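
Whichever model is configured, kernel code converts between page frame numbers and struct page with the same helpers, so the choice of model stays hidden; a brief sketch (kernel code, hypothetical function)::

    #include <linux/mm.h>

    static void pfn_roundtrip(unsigned long pfn)
    {
            struct page *page = pfn_to_page(pfn);

            /* The mapping round-trips however the model implements it. */
            WARN_ON(page_to_pfn(page) != pfn);
    }
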
hmm.rst
2 Heterogeneous Memory Management (HMM)
5 Provide infrastructure and helpers to integrate non-conventional memory (device
6 memory like GPU on board memory) into regular kernel path, with the cornerstone
7 of this being specialized struct page for such memory (see sections 5 to 7 of
10 HMM also provides optional helpers for SVM (Shared Virtual Memory), i.e.,
11 allowing a device to transparently access program addresses coherently with
13 for the device. This is becoming mandatory to simplify the use of advanced
14 heterogeneous computing where GPU, DSP, or FPGA are used to perform various
18 related to using device specific memory allocators. In the second section, I
19 expose the hardware limitations that are inherent to many platforms. The third
[all …]
numa.rst
12 or more CPUs, local memory, and/or IO buses. For brevity and to
17 Each of the 'cells' may be viewed as an SMP [symmetric multi-processor] subset
18 of the system--although some components necessary for a stand-alone SMP system
20 connected together with some sort of system interconnect--e.g., a crossbar or
21 point-to-point link are common types of NUMA system interconnects. Both of
22 these types of interconnects can be aggregated to create NUMA platforms with
26 Coherent NUMA or ccNUMA systems. With ccNUMA systems, all memory is visible
27 to and accessible from any CPU attached to any cell and cache coherency
30 Memory access time and effective memory bandwidth varies depending on how far
31 away the cell containing the CPU or IO bus making the memory access is from the
[all …]
/linux/Documentation/arch/arm64/
kdump.rst
2 crashkernel memory reservation on arm64
7 Kdump mechanism is used to capture a corrupted kernel vmcore so that
8 it can be subsequently analyzed. In order to do this, memory must be
9 reserved in advance to pre-load the kdump kernel and boot such
12 That reserved memory for kdump is adapted to be able to minimally
19 Through the kernel parameters below, memory can be reserved accordingly
21 large chunk of memory can be found. The low memory reservation needs to
22 be considered if the crashkernel is reserved from the high memory area.
24 - crashkernel=size@offset
25 - crashkernel=size
[all …]
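
Illustrative values for the parameter forms listed above (sizes and offsets are placeholders, not recommendations)::

    crashkernel=256M        reserve 256 MiB, kernel picks the base address
    crashkernel=256M@2G     reserve 256 MiB at physical offset 2 GiB
    crashkernel=1G,high     reserve 1 GiB from the high memory area,
    crashkernel=256M,low    plus an explicit low chunk for DMA
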
/linux/Documentation/arch/powerpc/
firmware-assisted-dump.rst
2 Firmware-Assisted Dump
7 The goal of firmware-assisted dump is to enable the dump of
8 a crashed system, and to do so from a fully-reset system, and
9 to minimize the total elapsed time until the system is back
12 - Firmware-Assisted Dump (FADump) infrastructure is intended to replace
14 - Fadump uses the same firmware interfaces and memory reservation model
16 - Unlike phyp dump, FADump exports the memory dump through /proc/vmcore
19 - Unlike phyp dump, the userspace tool does not need to refer to any sysfs
21 - Unlike phyp dump, FADump allows the user to release all the memory reserved
23 - Once enabled through kernel boot parameter, FADump can be
[all …]
/linux/Documentation/core-api/
memory-hotplug.rst
4 Memory hotplug
7 Memory hotplug event notifier
10 Hotplugging events are sent to a notification queue.
12 There are six types of notification defined in ``include/linux/memory.h``:
15 Generated before new memory becomes available in order to be able to
16 prepare subsystems to handle memory. The page allocator is still unable
17 to allocate from the new memory.
23 Generated when memory has successfully been brought online. The callback may
24 allocate pages from the new memory.
27 Generated to begin the process of offlining memory. Allocations are no
[all …]
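
A sketch (kernel code) of subscribing to that notification queue; the callback and variable names are hypothetical::

    #include <linux/memory.h>
    #include <linux/notifier.h>
    #include <linux/printk.h>

    static int demo_mem_callback(struct notifier_block *nb,
                                 unsigned long action, void *arg)
    {
            struct memory_notify *mn = arg;

            switch (action) {
            case MEM_GOING_ONLINE:  /* prepare; cannot allocate from it yet */
                    break;
            case MEM_ONLINE:        /* pages may now be allocated */
                    break;
            }
            pr_info("memory event %lu at pfn %lx\n", action, mn->start_pfn);
            return NOTIFY_OK;
    }

    static struct notifier_block demo_mem_nb = {
            .notifier_call = demo_mem_callback,
    };

    /* in init code: register_memory_notifier(&demo_mem_nb); */
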
memory-allocation.rst
4 Memory Allocation Guide
7 Linux provides a variety of APIs for memory allocation. You can
11 `alloc_pages`. It is also possible to use more specialized allocators,
14 Most of the memory allocation APIs use GFP flags to express how that
15 memory should be allocated. The GFP acronym stands for "get free
16 pages", the underlying memory allocation function.
19 makes the question "How should I allocate memory?" not that easy to
32 The GFP flags control the allocator's behavior. They tell which memory
33 zones can be used, how hard the allocator should try to find free
34 memory, whether the memory can be accessed by userspace, etc. The
[all …]
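
A sketch (kernel code) of the two most common cases the guide goes on to distinguish: GFP_KERNEL may sleep and is the default in process context; GFP_ATOMIC never sleeps and is for atomic context::

    #include <linux/slab.h>

    static void *alloc_examples(void)
    {
            void *a = kmalloc(256, GFP_KERNEL);     /* process context */
            void *b = kzalloc(256, GFP_ATOMIC);     /* atomic context, zeroed */

            kfree(b);
            return a;       /* caller frees with kfree() */
    }
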
/linux/Documentation/ABI/testing/
sysfs-devices-memory
1 What: /sys/devices/system/memory
5 The /sys/devices/system/memory contains a snapshot of the
6 internal state of the kernel memory blocks. Files could be
7 added or removed dynamically to represent hot-add/remove
9 Users: hotplug memory add/remove tools
10 http://www.ibm.com/developerworks/wikis/display/LinuxP/powerpc-utils
12 What: /sys/devices/system/memory/memoryX/removable
16 The file /sys/devices/system/memory/memoryX/removable is a
17 legacy interface used to indicate whether a memory block is
18 likely to be offlineable or not. Newer kernel versions return
[all …]
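
A userspace sketch reading the legacy attribute described above; block 0 is a placeholder::

    #include <stdio.h>

    int main(void)
    {
            char buf[8];
            FILE *f = fopen("/sys/devices/system/memory/memory0/removable", "r");

            if (f && fgets(buf, sizeof(buf), f))
                    printf("memory0 removable: %s", buf);
            if (f)
                    fclose(f);
            return 0;
    }
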
/linux/arch/powerpc/kexec/
ranges.c
1 // SPDX-License-Identifier: GPL-2.0-only
3 * powerpc code to implement the kexec_file_load syscall
12 * Based on kexec-tools' kexec-ppc64.c, fs2dt.c.
27 #include <asm/crashdump-ppc64.h>
31 * get_max_nr_ranges - Get the max no. of ranges crash_mem structure
39 return ((size - sizeof(struct crash_mem)) / in get_max_nr_ranges()
44 * get_mem_rngs_size - Get the allocated size of mem_rngs based on
46 * @mem_rngs: Memory ranges.
58 (mem_rngs->max_nr_ranges * sizeof(struct range))); in get_mem_rngs_size()
61 * Memory is allocated in size multiple of MEM_RANGE_CHUNK_SZ. in get_mem_rngs_size()
[all …]
/linux/Documentation/admin-guide/mm/damon/
reclaim.rst
1 .. SPDX-License-Identifier: GPL-2.0
4 DAMON-based Reclamation
7 DAMON-based Reclamation (DAMON_RECLAIM) is a static kernel module that aims to
8 be used for proactive and lightweight reclamation under light memory pressure.
9 It doesn't aim to replace the LRU-list based page_granularity reclamation, but
10 to be selectively used for different levels of memory pressure and requirements.
15 On general memory over-committed systems, proactively reclaiming cold pages
16 helps save memory and reduce the latency spikes incurred by the direct
20 Free Pages Reporting [3]_ based memory over-commit virtualization systems are
22 memory to host, and the host reallocates the reported memory to other guests.
[all …]
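
The full reclaim.rst explains that DAMON_RECLAIM is controlled through module parameters. A sketch that switches it on, assuming the module is built in or already loaded::

    #include <stdio.h>

    int main(void)
    {
            FILE *f = fopen("/sys/module/damon_reclaim/parameters/enabled", "w");

            if (!f) {
                    perror("fopen");        /* CONFIG_DAMON_RECLAIM missing? */
                    return 1;
            }
            fputs("Y", f);
            fclose(f);
            return 0;
    }
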
/linux/mm/
Kconfig
1 # SPDX-License-Identifier: GPL-2.0-only
3 menu "Memory Management options"
7 # add proper SWAP support to them, in which case this can be removed.
16 bool "Support for paging of anonymous memory (swap)"
20 This option allows you to choose whether you want to have support
22 used to provide more virtual memory than the actual RAM present
32 pages that are in the process of being swapped out and attempts to
33 compress them into a dynamically allocated RAM-based memory pool.
49 bool "Shrink the zswap pool on memory pressure"
55 written back to the backing swap device) on memory pressure.
[all …]
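
An illustrative .config fragment matching the prompts shown above; the option names are assumed from recent kernels and should be checked against your tree::

    CONFIG_SWAP=y
    CONFIG_ZSWAP=y
    CONFIG_ZSWAP_SHRINKER_DEFAULT_ON=y
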
memblock.c
1 // SPDX-License-Identifier: GPL-2.0-or-later
3 * Procedures for maintaining information about logical memory blocks.
39 * Memblock is a method of managing memory regions during the early
40 * boot period when the usual kernel memory allocators are not up and
43 * Memblock views the system memory as collections of contiguous
46 * * ``memory`` - describes the physical memory available to the
47 * kernel; this may differ from the actual physical memory installed
48 * in the system, for instance when the memory is restricted with
50 * * ``reserved`` - describes the regions that were allocated
51 * * ``physmem`` - describes the actual physical memory available during
[all …]
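
A sketch (early-boot kernel code) of the memblock calls the comment describes; the addresses and sizes are made up::

    #include <linux/memblock.h>
    #include <linux/sizes.h>

    static void __init memblock_example(void)
    {
            void *p;

            memblock_add(0x4000000, SZ_1G);      /* 1 GiB of RAM at 64 MiB */
            memblock_reserve(0x4000000, SZ_16M); /* keep a firmware area out */
            p = memblock_alloc(PAGE_SIZE, SMP_CACHE_BYTES);
            if (!p)
                    panic("early allocation failed\n");
    }
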
/linux/Documentation/dev-tools/
kasan.rst
1 .. SPDX-License-Identifier: GPL-2.0
8 --------
10 Kernel Address Sanitizer (KASAN) is a dynamic memory safety error detector
11 designed to find out-of-bounds and use-after-free bugs.
16 2. Software Tag-Based KASAN
17 3. Hardware Tag-Based KASAN
20 debugging, similar to userspace ASan. This mode is supported on many CPU
21 architectures, but it has significant performance and memory overheads.
23 Software Tag-Based KASAN or SW_TAGS KASAN, enabled with CONFIG_KASAN_SW_TAGS,
24 can be used for both debugging and dogfood testing, similar to userspace HWASan.
[all …]
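
A sketch (kernel code) of the class of bug KASAN reports: with any of the three modes enabled, the store below produces a slab-out-of-bounds report instead of silent corruption::

    #include <linux/slab.h>

    static void kasan_demo(void)
    {
            char *p = kmalloc(8, GFP_KERNEL);

            if (!p)
                    return;
            p[8] = 'x';     /* one byte past the end: KASAN flags this */
            kfree(p);
    }
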
kmemleak.rst
1 Kernel Memory Leak Detector
4 Kmemleak provides a way of detecting possible kernel memory leaks in a
5 way similar to a `tracing garbage collector
9 Valgrind tool (``memcheck --leak-check``) to detect the memory leaks in
10 user-space applications.
13 -----
15 CONFIG_DEBUG_KMEMLEAK in "Kernel hacking" has to be enabled. A kernel
16 thread scans the memory every 10 minutes (by default) and prints the
20 # mount -t debugfs nodev /sys/kernel/debug/
22 To display the details of all the possible scanned memory leaks::
[all …]
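
A sketch (kernel code) of an allocation kmemleak would flag: the pointer is discarded, so after a scan the object shows up as an unreferenced suspect::

    #include <linux/slab.h>

    static void leak_demo(void)
    {
            void *p = kmalloc(64, GFP_KERNEL);

            (void)p;        /* pointer dropped: nothing references the object */
    }

A scan can be forced with ``echo scan > /sys/kernel/debug/kmemleak`` and the report read back from the same file.
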
/linux/drivers/base/
memory.c
1 // SPDX-License-Identifier: GPL-2.0
3 * Memory subsystem support
8 * This file provides the necessary infrastructure to represent
9 * a SPARSEMEM-memory-model system's physical memory in /sysfs.
10 * All arch-independent code that assumes MEMORY_HOTPLUG requires
19 #include <linux/memory.h>
29 #define MEMORY_CLASS_NAME "memory"
46 return -EINVAL; in mhp_online_type_from_str()
79 * Memory blocks are cached in a local radix tree to avoid
86 * Memory groups, indexed by memory group id (mgid).
[all …]
/linux/Documentation/driver-api/
ntb.rst
5 NTB (Non-Transparent Bridge) is a type of PCI-Express bridge chip that connects
6 the separate memory systems of two or more computers to the same PCI-Express
8 registers and memory translation windows, as well as non common features like
9 scratchpad and message registers. Scratchpad registers are read-and-writable
13 special status bits to make sure the information isn't rewritten by another
14 peer. Doorbell registers provide a way for peers to send interrupt events.
15 Memory windows allow translated read and write access to the peer memory.
21 clients interested in NTB features to discover the NTB devices supported by
22 hardware drivers. The term "client" is used here to mean an upper layer
24 is used here to mean a driver for a specific vendor and model of NTB hardware.
[all …]
/linux/drivers/cxl/
Kconfig
1 # SPDX-License-Identifier: GPL-2.0-only
13 CXL.mem). The CXL.cache protocol allows devices to hold cachelines
14 locally, the CXL.mem protocol allows devices to be fully coherent
15 memory targets, the CXL.io protocol is equivalent to PCI Express.
16 Say 'y' to enable support for the configuration and management of
25 The CXL specification defines a "CXL memory device" sub-class in the
26 PCI "memory controller" base class of devices. Devices identified by
28 memory to be mapped into the system address map (Host-managed Device
29 Memory (HDM)).
31 Say 'y/m' to enable a driver that will attach to CXL memory expander
[all …]
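
An illustrative .config fragment enabling the drivers described above (option names assumed from recent kernels)::

    CONFIG_CXL_BUS=y
    CONFIG_CXL_PCI=m
    CONFIG_CXL_MEM=m
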
/linux/tools/testing/selftests/cgroup/
test_memcontrol.c
1 /* SPDX-License-Identifier: GPL-2.0 */
29 * the memory controller.
37 /* Create two nested cgroups with the memory controller enabled */ in test_memcg_subtree_control()
46 if (cg_write(parent, "cgroup.subtree_control", "+memory")) in test_memcg_subtree_control()
52 if (cg_read_strstr(child, "cgroup.controllers", "memory")) in test_memcg_subtree_control()
55 /* Create two nested cgroups without enabling memory controller */ in test_memcg_subtree_control()
70 if (!cg_read_strstr(child2, "cgroup.controllers", "memory")) in test_memcg_subtree_control()
98 int ret = -1; in alloc_anon_50M_check()
103 return -1; in alloc_anon_50M_check()
109 current = cg_read_long(cgroup, "memory.current"); in alloc_anon_50M_check()
[all …]
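
What the selftest's cg_write()/cg_read_long() helpers boil down to, as a standalone userspace sketch against cgroup v2 (the paths are illustrative)::

    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
            FILE *f;
            long bytes = 0;

            mkdir("/sys/fs/cgroup/parent", 0755);
            f = fopen("/sys/fs/cgroup/parent/cgroup.subtree_control", "w");
            if (!f)
                    return 1;
            fputs("+memory", f);    /* children get the memory controller */
            fclose(f);

            mkdir("/sys/fs/cgroup/parent/child", 0755);
            f = fopen("/sys/fs/cgroup/parent/child/memory.current", "r");
            if (f && fscanf(f, "%ld", &bytes) == 1)
                    printf("memory.current = %ld\n", bytes);
            if (f)
                    fclose(f);
            return 0;
    }
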
/linux/Documentation/userspace-api/media/v4l/
dev-mem2mem.rst
1 .. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later
6 Video Memory-To-Memory Interface
9 A V4L2 memory-to-memory device can compress, decompress, transform, or
10 otherwise convert video data from one format into another format, in memory.
11 Such memory-to-memory devices set the ``V4L2_CAP_VIDEO_M2M`` or
12 ``V4L2_CAP_VIDEO_M2M_MPLANE`` capability. Examples of memory-to-memory
14 converting from YUV to RGB).
16 A memory-to-memory video node acts just like a normal video node, but it
17 supports both output (sending frames from memory to the hardware)
19 memory) stream I/O. An application will have to set up the stream I/O for
[all …]
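
A userspace sketch that detects a memory-to-memory node by checking the capability flags named above; /dev/video0 is a placeholder::

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int main(void)
    {
            struct v4l2_capability cap;
            int fd = open("/dev/video0", O_RDWR);

            if (fd < 0)
                    return 1;
            memset(&cap, 0, sizeof(cap));
            if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0 &&
                (cap.device_caps &
                 (V4L2_CAP_VIDEO_M2M | V4L2_CAP_VIDEO_M2M_MPLANE)))
                    printf("%s is a mem-to-mem device\n",
                           (const char *)cap.card);
            close(fd);
            return 0;
    }
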
/linux/tools/testing/memblock/tests/
basic_api.c
1 // SPDX-License-Identifier: GPL-2.0-or-later
17 ASSERT_NE(memblock.memory.regions, NULL); in memblock_initialization_check()
18 ASSERT_EQ(memblock.memory.cnt, 0); in memblock_initialization_check()
19 ASSERT_EQ(memblock.memory.max, EXPECTED_MEMBLOCK_REGIONS); in memblock_initialization_check()
20 ASSERT_EQ(strcmp(memblock.memory.name, "memory"), 0); in memblock_initialization_check()
24 ASSERT_EQ(memblock.memory.max, EXPECTED_MEMBLOCK_REGIONS); in memblock_initialization_check()
36 * A simple test that adds a memory block of a specified base address
37 * and size to the collection of available memory regions (memblock.memory).
38 * Expect to create a new entry. The region counter and total memory get
45 rgn = &memblock.memory.regions[0]; in memblock_add_simple_check()
[all …]
