
Searched full:memory (Results 1 – 25 of 7014) sorted by relevance


/linux/tools/testing/selftests/memory-hotplug/
mem-on-off-test.sh
25 if ! ls $SYSFS/devices/system/memory/memory* > /dev/null 2>&1; then
26 echo $msg memory hotplug is not supported >&2
30 if ! grep -q 1 $SYSFS/devices/system/memory/memory*/removable; then
31 echo $msg no hot-pluggable memory >&2
37 # list all hot-pluggable memory
43 for memory in $SYSFS/devices/system/memory/memory*; do
44 if grep -q 1 $memory/removable &&
45 grep -q $state $memory/state; then
46 echo ${memory##/*/memory}
63 grep -q online $SYSFS/devices/system/memory/memory$1/state
[all …]
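The script above walks the sysfs memory-block directory; the same probe can be done from C. A minimal userspace sketch (assuming sysfs mounted at /sys and the memoryN/removable and state files documented elsewhere in these results; error handling trimmed):

/* Sketch: list memory blocks the way the selftest does, by scanning
 * /sys/devices/system/memory/memoryN/{removable,state}. Assumes sysfs is
 * mounted at /sys; error handling trimmed.
 */
#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <string.h>

static int read_first_line(const char *path, char *buf, size_t len)
{
        FILE *f = fopen(path, "r");

        if (!f)
                return -1;
        if (!fgets(buf, len, f)) {
                fclose(f);
                return -1;
        }
        buf[strcspn(buf, "\n")] = '\0';
        fclose(f);
        return 0;
}

int main(void)
{
        const char *base = "/sys/devices/system/memory";
        struct dirent *de;
        DIR *dir = opendir(base);

        if (!dir)
                return 1;

        while ((de = readdir(dir))) {
                char path[512], removable[8], state[32];

                if (strncmp(de->d_name, "memory", 6) ||
                    !isdigit((unsigned char)de->d_name[6]))
                        continue;

                snprintf(path, sizeof(path), "%s/%s/removable", base, de->d_name);
                if (read_first_line(path, removable, sizeof(removable)))
                        continue;
                snprintf(path, sizeof(path), "%s/%s/state", base, de->d_name);
                if (read_first_line(path, state, sizeof(state)))
                        continue;

                printf("%s: removable=%s state=%s\n", de->d_name, removable, state);
        }
        closedir(dir);
        return 0;
}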
/linux/Documentation/admin-guide/mm/
concepts.rst
5 The memory management in Linux is a complex system that evolved over the
7 systems from MMU-less microcontrollers to supercomputers. The memory
16 Virtual Memory Primer
19 The physical memory in a computer system is a limited resource and
20 even for systems that support memory hotplug there is a hard limit on
21 the amount of memory that can be installed. The physical memory is not
27 All this makes dealing directly with physical memory quite complex and
28 to avoid this complexity a concept of virtual memory was developed.
30 The virtual memory abstracts the details of physical memory from the
32 physical memory (demand paging) and provides a mechanism for the
[all …]
numaperf.rst
2 NUMA Memory Performance
8 Some platforms may have multiple types of memory attached to a compute
9 node. These disparate memory ranges may share some characteristics, such
13 A system supports such heterogeneous memory by grouping each memory type
15 characteristics. Some memory may share the same node as a CPU, and others
16 are provided as memory only nodes. While memory only nodes do not provide
19 nodes with local memory and a memory only node for each of compute node::
30 A "memory initiator" is a node containing one or more devices such as
31 CPUs or separate memory I/O devices that can initiate memory requests.
32 A "memory target" is a node containing one or more physical address
[all …]
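numaperf.rst goes on to describe how the kernel exports the reported initiator/target performance values under each node's accessN directory in sysfs. A hedged sketch that dumps those attributes for node 0, access class 0; the exact paths are an assumption taken from that document and may be absent on homogeneous systems:

/* Hedged sketch: dump the access characteristics numaperf.rst describes,
 * for node 0 / access class 0. The path layout is assumed from that
 * document and may be missing on systems without heterogeneous memory.
 */
#include <stdio.h>

int main(void)
{
        static const char * const attrs[] = {
                "read_bandwidth", "read_latency",
                "write_bandwidth", "write_latency",
        };
        char path[256], line[64];

        for (unsigned int i = 0; i < sizeof(attrs) / sizeof(attrs[0]); i++) {
                FILE *f;

                snprintf(path, sizeof(path),
                         "/sys/devices/system/node/node0/access0/initiators/%s",
                         attrs[i]);
                f = fopen(path, "r");
                if (!f)
                        continue;       /* attribute not exported here */
                if (fgets(line, sizeof(line), f))
                        printf("%s: %s", attrs[i], line);
                fclose(f);
        }
        return 0;
}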
numa_memory_policy.rst
2 NUMA Memory Policy
5 What is NUMA Memory Policy?
8 In the Linux kernel, "memory policy" determines from which node the kernel will
9 allocate memory in a NUMA system or in an emulated NUMA system. Linux has
10 supported platforms with Non-Uniform Memory Access architectures since 2.4.?.
11 The current memory policy support was added to Linux 2.6 around May 2004. This
12 document attempts to describe the concepts and APIs of the 2.6 memory policy
15 Memory policies should not be confused with cpusets
18 memory may be allocated by a set of processes. Memory policies are a
21 takes priority. See :ref:`Memory Policies and cpusets <mem_pol_and_cpusets>`
[all …]
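From userspace, the policy described here is set with set_mempolicy(2) (or mbind(2) for one mapping). A minimal sketch binding the calling task's future allocations to node 0, assuming libnuma's <numaif.h> for the prototype and MPOL_BIND constant (link with -lnuma):

/* Sketch: restrict this task's future page allocations to NUMA node 0
 * with set_mempolicy(2). Assumes libnuma headers: cc mempolicy.c -lnuma
 */
#include <numaif.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
        unsigned long nodemask = 1UL << 0;      /* bit 0 selects node 0 */

        /* third argument: number of bits in the node mask */
        if (set_mempolicy(MPOL_BIND, &nodemask, 8 * sizeof(nodemask)) < 0) {
                perror("set_mempolicy");
                return 1;
        }

        /* Pages faulted in from now on come from node 0, or the
         * allocation fails, per MPOL_BIND semantics. */
        char *buf = malloc(1 << 20);

        if (buf) {
                memset(buf, 0, 1 << 20);
                printf("1 MiB touched under MPOL_BIND to node 0\n");
                free(buf);
        }
        return 0;
}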
/linux/Documentation/mm/
memory-model.rst
4 Physical Memory Model
7 Physical memory in a system may be addressed in different ways. The
8 simplest case is when the physical memory starts at address 0 and
13 different memory banks are attached to different CPUs.
15 Linux abstracts this diversity using one of the two memory models:
17 memory models it supports, what the default memory model is and
20 All the memory models track the status of physical page frames using
23 Regardless of the selected memory model, there exists one-to-one
27 Each memory model defines :c:func:`pfn_to_page` and :c:func:`page_to_pfn`
34 The simplest memory model is FLATMEM. This model is suitable for
[all …]
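Both helpers named above exist regardless of the configured memory model, and the mapping they implement is one-to-one. A small kernel-side sketch of that invariant (illustrative only; pfn_roundtrip_ok is a made-up helper name):

/* Kernel-side sketch: the pfn <-> struct page mapping is one-to-one, so a
 * round trip through page_to_pfn()/pfn_to_page() yields the same page no
 * matter whether FLATMEM or SPARSEMEM is configured. pfn_roundtrip_ok()
 * is a made-up helper for illustration.
 */
#include <linux/mm.h>

static bool pfn_roundtrip_ok(struct page *page)
{
        unsigned long pfn = page_to_pfn(page);

        return pfn_valid(pfn) && pfn_to_page(pfn) == page;
}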
numa.rst
12 or more CPUs, local memory, and/or IO buses. For brevity and to
26 Coherent NUMA or ccNUMA systems. With ccNUMA systems, all memory is visible
30 Memory access time and effective memory bandwidth vary depending on how far
31 away the cell containing the CPU or IO bus making the memory access is from the
32 cell containing the target memory. For example, access to memory by CPUs
34 bandwidths than accesses to memory on other, remote cells. NUMA platforms
39 memory bandwidth. However, to achieve scalable memory bandwidth, system and
40 application software must arrange for a large majority of the memory references
41 [cache misses] to be to "local" memory--memory on the same cell, if any--or
42 to the closest cell with memory.
[all …]
/linux/Documentation/ABI/testing/
sysfs-devices-memory
1 What: /sys/devices/system/memory
5 The /sys/devices/system/memory contains a snapshot of the
6 internal state of the kernel memory blocks. Files could be
9 Users: hotplug memory add/remove tools
12 What: /sys/devices/system/memory/memoryX/removable
16 The file /sys/devices/system/memory/memoryX/removable is a
17 legacy interface used to indicate whether a memory block is
19 "1" if and only if the kernel supports memory offlining.
20 Users: hotplug memory remove tools
24 What: /sys/devices/system/memory/memoryX/phys_device
[all …]
/linux/Documentation/edac/
memory_repair.rst
4 EDAC Memory Repair Control
20 Some memory devices support repair operations to address issues in their
21 memory media. Post Package Repair (PPR) and memory sparing are examples of
27 Post Package Repair is a maintenance operation which requests the memory
28 device to perform repair operation on its media. It is a memory self-healing
29 feature that fixes a failing memory location by replacing it with a spare row
32 For example, a CXL memory device with DRAM components that support PPR
42 The data may not be retained and memory requests may not be correctly
46 For example, for CXL memory devices, see CXL spec rev 3.1 [1]_ sections
50 Memory Sparing
[all …]
scrub.rst
19 Increasing DRAM size and cost have made memory subsystem reliability an
21 could cause expensive or fatal issues. Memory errors are among the top
24 Memory scrubbing is a feature where an ECC (Error-Correcting Code) engine
25 reads data from each memory media location, corrects if necessary and writes
26 the corrected data back to the same memory media location.
28 DIMMs can be scrubbed at a configurable rate to detect uncorrected memory
35 2. When detected, uncorrected errors caught in unallocated memory pages are
39 memory errors.
41 4. The additional data on failures in memory may be used to build up
42 statistics that are later used to decide whether to use memory repair
[all …]
/linux/drivers/cxl/
Kconfig
16 memory targets, the CXL.io protocol is equivalent to PCI Express.
26 The CXL specification defines a "CXL memory device" sub-class in the
27 PCI "memory controller" base class of devices. Devices identified by
29 memory to be mapped into the system address map (Host-managed Device
30 Memory (HDM)).
32 Say 'y/m' to enable a driver that will attach to CXL memory expander
33 devices enumerated by the memory device class code for configuration
40 bool "RAW Command Interface for Memory Devices"
53 potential impact to memory currently in use by the kernel.
66 Enable support for host managed device memory (HDM) resources
[all …]
/linux/tools/testing/selftests/cgroup/
test_memcontrol.c
107 * the memory controller. in alloc_anon_50M_check()
115 /* Create two nested cgroups with the memory controller enabled */ in alloc_anon_50M_check()
124 if (cg_write(parent, "cgroup.subtree_control", "+memory")) in alloc_anon_50M_check()
130 if (cg_read_strstr(child, "cgroup.controllers", "memory")) in alloc_pagecache_50M_check()
133 /* Create two nested cgroups without enabling memory controller */ in alloc_pagecache_50M_check()
148 if (!cg_read_strstr(child2, "cgroup.controllers", "memory")) in alloc_pagecache_50M_check()
187 current = cg_read_long(cgroup, "memory.current"); in test_memcg_current_peak()
194 anon = cg_read_key_long(cgroup, "memory.stat", "anon "); in test_memcg_current_peak()
221 current = cg_read_long(cgroup, "memory.current"); in test_memcg_current_peak()
225 file = cg_read_key_long(cgroup, "memory in test_memcg_current_peak()
[all …]
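The selftest reads the cgroup v2 interface files memory.current and memory.stat; any tool can do the same with plain file I/O. A sketch for a hypothetical cgroup at /sys/fs/cgroup/mygroup (the path and mount point are assumptions):

/* Sketch: read the cgroup v2 memory counters that the selftest checks,
 * using plain file I/O. The cgroup path below is a placeholder; cgroup2
 * is assumed to be mounted at /sys/fs/cgroup.
 */
#include <stdio.h>
#include <string.h>

#define CG "/sys/fs/cgroup/mygroup"     /* hypothetical cgroup */

static long read_long(const char *file)
{
        char path[256];
        long val = -1;
        FILE *f;

        snprintf(path, sizeof(path), CG "/%s", file);
        f = fopen(path, "r");
        if (f) {
                if (fscanf(f, "%ld", &val) != 1)
                        val = -1;
                fclose(f);
        }
        return val;
}

static long read_stat_key(const char *key)
{
        char name[64];
        long val = -1, v;
        FILE *f = fopen(CG "/memory.stat", "r");

        if (!f)
                return -1;
        while (fscanf(f, "%63s %ld", name, &v) == 2) {
                if (!strcmp(name, key)) {
                        val = v;
                        break;
                }
        }
        fclose(f);
        return val;
}

int main(void)
{
        printf("memory.current = %ld bytes\n", read_long("memory.current"));
        printf("anon           = %ld bytes\n", read_stat_key("anon"));
        return 0;
}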
/linux/Documentation/arch/arm64/
kdump.rst
2 crashkernel memory reservation on arm64
9 reserved memory is needed to pre-load the kdump kernel and boot such
12 That reserved memory for kdump is adapted to be able to minimally
19 Through the kernel parameters below, memory can be reserved accordingly
21 large chunk of memory can be found. The low memory reservation needs to
22 be considered if the crashkernel is reserved from the high memory area.
28 Low memory and high memory
31 For kdump reservations, low memory is the memory area under a specific
34 vmcore dumping can be ignored. On arm64, the low memory upper bound is
37 whole system RAM is low memory. Outside of the low memory described
[all …]
/linux/Documentation/driver-api/cxl/platform/
bios-and-efi.rst
19 * BIOS/EFI create the system memory map (EFI Memory Map, E820, etc)
24 static memory map configuration. More detail on these tables can be found
29 on physical memory region size and alignment, memory holes, HDM interleave,
39 When this is enabled, this bit tells linux to defer management of a memory
40 region to a driver (in this case, the CXL driver). Otherwise, the memory is
41 treated as "normal memory", and is exposed to the page allocator during
60 Memory Attribute` field. This may be called something else on your platform.
62 :code:`uefisettings get "CXL Memory Attribute"` ::
67 name: "CXL Memory Attribute",
72 Physical Memory Map
[all …]
/linux/Documentation/driver-api/cxl/linux/
memory-hotplug.rst
4 Memory Hotplug
6 The final phase of surfacing CXL memory to the kernel page allocator is for
7 the `DAX` driver to surface a `Driver Managed` memory region via the
8 memory-hotplug component.
13 2) Hotplug Memory Block size
14 3) Memory Map Resource location
15 4) Driver-Managed Memory Designation
19 The default-online behavior of hotplug memory is dictated by the following,
24 - :code:`/sys/devices/system/memory/auto_online_blocks` value
26 These dictate whether hotplugged memory blocks arrive in one of three states:
[all …]
/linux/mm/
Kconfig
3 menu "Memory Management options"
16 bool "Support for paging of anonymous memory (swap)"
22 used to provide more virtual memory than the actual RAM present
33 compress them into a dynamically allocated RAM-based memory pool.
49 bool "Shrink the zswap pool on memory pressure"
55 written back to the backing swap device) on memory pressure.
60 and consume memory indefinitely.
160 zsmalloc is a slab-based memory allocator designed to store
203 bool "Configure for minimal memory footprint"
207 Configures the slab allocator in a way to achieve minimal memory
[all …]
/linux/arch/powerpc/kexec/
ranges.c
46 * @mem_rngs: Memory ranges.
61 * Memory is allocated in size multiple of MEM_RANGE_CHUNK_SZ. in get_mem_rngs_size()
68 * __add_mem_range - add a memory range to memory ranges list.
69 * @mem_ranges: Range list to add the memory range to.
71 * @size: Size of the memory range to add.
73 * (Re)allocates memory, if needed.
89 pr_debug("Added memory range [%#016llx - %#016llx] at index %d\n", in __add_mem_range()
96 * __merge_memory_ranges - Merges the given memory ranges list.
140 * sort_memory_ranges - Sorts the given memory ranges list.
161 pr_debug("Memory ranges:\n"); in sort_memory_ranges()
[all …]
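The helpers shown here keep a list of physical ranges sorted and merged. A standalone sketch of that sort-then-coalesce pattern, not the kernel code itself (simplified types, plain libc):

/* Standalone sketch of the sort + merge pattern used for memory range
 * lists: sort by base address, then coalesce adjacent/overlapping ranges.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct mem_range { uint64_t base, size; };

static int cmp_base(const void *a, const void *b)
{
        const struct mem_range *ra = a, *rb = b;

        return (ra->base > rb->base) - (ra->base < rb->base);
}

/* Returns the new number of ranges after merging in place. */
static size_t merge_ranges(struct mem_range *r, size_t n)
{
        size_t out = 0;

        qsort(r, n, sizeof(*r), cmp_base);
        for (size_t i = 0; i < n; i++) {
                if (out && r[i].base <= r[out - 1].base + r[out - 1].size) {
                        uint64_t end = r[i].base + r[i].size;
                        uint64_t cur = r[out - 1].base + r[out - 1].size;

                        if (end > cur)
                                r[out - 1].size = end - r[out - 1].base;
                } else {
                        r[out++] = r[i];
                }
        }
        return out;
}

int main(void)
{
        struct mem_range r[] = {
                { 0x100000, 0x1000 }, { 0x0, 0x1000 }, { 0x800, 0x1000 },
        };
        size_t n = merge_ranges(r, 3);

        for (size_t i = 0; i < n; i++)
                printf("[%#llx - %#llx]\n",
                       (unsigned long long)r[i].base,
                       (unsigned long long)(r[i].base + r[i].size - 1));
        return 0;
}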
/linux/Documentation/admin-guide/mm/damon/
reclaim.rst
8 be used for proactive and lightweight reclamation under light memory pressure.
10 to be selectively used for different levels of memory pressure and requirements.
15 On general memory over-committed systems, proactively reclaiming cold pages
16 helps save memory and reduce latency spikes incurred by the direct
20 Free Pages Reporting [3]_ based memory over-commit virtualization systems are
22 memory to host, and the host reallocates the reported memory to other guests.
23 As a result, the memory of the systems is fully utilized. However, the
24 guests might not be so memory-frugal, mainly because some kernel subsystems and
25 user-space applications are designed to use as much memory as available. Then,
26 guests could report only a small amount of memory as free to the host, resulting in
[all …]
/linux/drivers/gpu/drm/nouveau/nvkm/core/
memory.c
24 #include <core/memory.h>
30 nvkm_memory_tags_put(struct nvkm_memory *memory, struct nvkm_device *device, in nvkm_memory_tags_put() argument
39 kfree(memory->tags); in nvkm_memory_tags_put()
40 memory->tags = NULL; in nvkm_memory_tags_put()
48 nvkm_memory_tags_get(struct nvkm_memory *memory, struct nvkm_device *device, in nvkm_memory_tags_get() argument
56 if ((tags = memory->tags)) { in nvkm_memory_tags_get()
57 /* If comptags exist for the memory, but a different amount in nvkm_memory_tags_get()
84 * As memory can be mapped in multiple places, we still in nvkm_memory_tags_get()
94 *ptags = memory->tags = tags; in nvkm_memory_tags_get()
101 struct nvkm_memory *memory) in nvkm_memory_ctor() argument
[all …]
/linux/Documentation/driver-api/cxl/devices/
device-types.rst
7 The type of CXL device (Memory, Accelerator, etc) dictates many configuration steps. This section
23 other than memory (CXL.mem) or cache (CXL.cache) operations.
31 The mechanism by which a device may coherently access and cache host memory.
37 The mechanism by which the CPU may coherently access and cache device memory.
53 * Does NOT have host-managed device memory (HDM)
56 directly operate on host-memory (DMA) to store incoming packets. These
57 devices largely rely on CPU-attached memory.
65 * Optionally implements coherent cache and Host-Managed Device Memory
66 * Is typically an accelerator device with high bandwidth memory.
69 of host-managed device memory, which allows the device to operate on a
[all …]
/linux/drivers/memory/tegra/
Kconfig
3 bool "NVIDIA Tegra Memory Controller support"
8 This driver supports the Memory Controller (MC) hardware found on
14 tristate "NVIDIA Tegra20 External Memory Controller driver"
21 This driver is for the External Memory Controller (EMC) found on
23 This driver is required to change memory timings / clock rate for
24 external memory.
27 tristate "NVIDIA Tegra30 External Memory Controller driver"
33 This driver is for the External Memory Controller (EMC) found on
35 This driver is required to change memory timings / clock rate for
36 external memory.
[all …]
/linux/Documentation/core-api/
memory-allocation.rst
4 Memory Allocation Guide
7 Linux provides a variety of APIs for memory allocation. You can
14 Most of the memory allocation APIs use GFP flags to express how that
15 memory should be allocated. The GFP acronym stands for "get free
16 pages", the underlying memory allocation function.
19 makes the question "How should I allocate memory?" not that easy to
32 The GFP flags control the allocator's behavior. They tell what memory
34 memory, whether the memory can be accessed by the userspace etc. The
39 * Most of the time ``GFP_KERNEL`` is what you need. Memory for the
40 kernel data structures, DMAable memory, inode cache, all these and
[all …]
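The guide's main recommendation above is GFP_KERNEL for ordinary allocations that may sleep, with GFP_ATOMIC reserved for contexts that cannot. A kernel-side sketch of the two cases (illustrative only; struct foo is a placeholder):

/* Kernel-side sketch of the two most common GFP choices described above.
 * GFP_KERNEL may sleep to reclaim memory; GFP_ATOMIC may not and suits
 * interrupt or other atomic context. struct foo is a placeholder.
 */
#include <linux/slab.h>

struct foo {
        int a, b;
};

static struct foo *alloc_foo_may_sleep(void)
{
        return kzalloc(sizeof(struct foo), GFP_KERNEL); /* process context */
}

static struct foo *alloc_foo_atomic(void)
{
        return kzalloc(sizeof(struct foo), GFP_ATOMIC);  /* cannot sleep */
}

static void free_foo(struct foo *f)
{
        kfree(f);
}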
/linux/Documentation/userspace-api/media/v4l/
dev-mem2mem.rst
6 Video Memory-To-Memory Interface
9 A V4L2 memory-to-memory device can compress, decompress, transform, or
10 otherwise convert video data from one format into another format, in memory.
11 Such memory-to-memory devices set the ``V4L2_CAP_VIDEO_M2M`` or
12 ``V4L2_CAP_VIDEO_M2M_MPLANE`` capability. Examples of memory-to-memory
16 A memory-to-memory video node acts just like a normal video node, but it
17 supports both output (sending frames from memory to the hardware)
19 memory) stream I/O. An application will have to setup the stream I/O for
23 Memory-to-memory devices function as a shared resource: you can
32 One of the most common memory-to-memory devices is the codec. Codecs
[all …]
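Whether a video node is such a memory-to-memory device can be checked with VIDIOC_QUERYCAP. A userspace sketch testing for the two capability flags named above (/dev/video0 is a placeholder):

/* Sketch: detect a memory-to-memory video device by querying its
 * capabilities with VIDIOC_QUERYCAP. /dev/video0 is a placeholder node.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
        struct v4l2_capability cap = { 0 };
        int fd = open("/dev/video0", O_RDWR);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) {
                perror("VIDIOC_QUERYCAP");
                close(fd);
                return 1;
        }
        if (cap.device_caps & (V4L2_CAP_VIDEO_M2M | V4L2_CAP_VIDEO_M2M_MPLANE))
                printf("%s: memory-to-memory device\n", cap.card);
        else
                printf("%s: not a memory-to-memory device\n", cap.card);
        close(fd);
        return 0;
}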
/linux/tools/testing/memblock/tests/
basic_api.c
17 ASSERT_NE(memblock.memory.regions, NULL); in memblock_initialization_check()
18 ASSERT_EQ(memblock.memory.cnt, 0); in memblock_initialization_check()
19 ASSERT_EQ(memblock.memory.max, EXPECTED_MEMBLOCK_REGIONS); in memblock_initialization_check()
20 ASSERT_EQ(strcmp(memblock.memory.name, "memory"), 0); in memblock_initialization_check()
24 ASSERT_EQ(memblock.memory.max, EXPECTED_MEMBLOCK_REGIONS); in memblock_initialization_check()
36 * A simple test that adds a memory block of a specified base address
37 * and size to the collection of available memory regions (memblock.memory).
38 * Expect to create a new entry. The region counter and total memory get
45 rgn = &memblock.memory.regions[0]; in memblock_add_simple_check()
60 ASSERT_EQ(memblock.memory.cnt, 1); in memblock_add_simple_check()
[all …]
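The test drives the same memblock interface that early arch setup code uses to register RAM. A kernel-internal sketch, valid only during early boot and with made-up addresses:

/* Kernel-internal sketch: register a range of physical RAM with memblock
 * and reserve a slice of it, as early arch setup code does. Base and size
 * are made-up values.
 */
#include <linux/init.h>
#include <linux/memblock.h>
#include <linux/sizes.h>

static void __init example_register_ram(void)
{
        phys_addr_t base = 0x80000000;          /* made-up RAM base */

        memblock_add(base, SZ_512M);            /* appears in memblock.memory */
        memblock_reserve(base, SZ_1M);          /* carve out a firmware area */
}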
/linux/include/linux/
execmem.h
18 * enum execmem_type - types of executable memory ranges
20 * There are several subsystems that allocate executable memory.
22 * permissions, alignment and other parameters for memory that can be used
24 * Types in this enum identify subsystems that allocate executable memory
48 * enum execmem_range_flags - options for executable memory allocations
59 * execmem_fill_trapping_insns - set memory to contain instructions that
61 * @ptr: pointer to memory to fill
93 * @pgprot: permissions for memory in this address space
95 * @flags: options for memory allocations for this range
110 * parameters for executable memory allocations. The ranges that are not
[all …]
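execmem.h also declares the allocation entry points for these typed ranges. A hedged kernel-side sketch, assuming the execmem_alloc()/execmem_free() pair and the EXECMEM_MODULE_TEXT type provided by recent kernels:

/* Hedged kernel-side sketch: grab a page of module-text style executable
 * memory from the execmem allocator and release it again. Assumes the
 * execmem_alloc()/execmem_free() interface declared by this header.
 */
#include <linux/errno.h>
#include <linux/execmem.h>
#include <linux/mm.h>

static int try_execmem(void)
{
        void *code = execmem_alloc(EXECMEM_MODULE_TEXT, PAGE_SIZE);

        if (!code)
                return -ENOMEM;

        /* ... generate or copy instructions here, honouring the range's
         * pgprot and flags, then flush caches as the architecture needs ... */

        execmem_free(code);
        return 0;
}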
/linux/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/
mem.c
22 #define nvkm_mem(p) container_of((p), struct nvkm_mem, memory)
25 #include <core/memory.h>
31 struct nvkm_memory memory; member
43 nvkm_mem_target(struct nvkm_memory *memory) in nvkm_mem_target() argument
45 return nvkm_mem(memory)->target; in nvkm_mem_target()
49 nvkm_mem_page(struct nvkm_memory *memory) in nvkm_mem_page() argument
55 nvkm_mem_addr(struct nvkm_memory *memory) in nvkm_mem_addr() argument
57 struct nvkm_mem *mem = nvkm_mem(memory); in nvkm_mem_addr()
64 nvkm_mem_size(struct nvkm_memory *memory) in nvkm_mem_size() argument
66 return nvkm_mem(memory)->pages << PAGE_SHIFT; in nvkm_mem_size()
[all …]
