| /linux/tools/testing/selftests/memory-hotplug/ |
| H A D | mem-on-off-test.sh | 25 if ! ls $SYSFS/devices/system/memory/memory* > /dev/null 2>&1; then 26 echo $msg memory hotplug is not supported >&2 30 if ! grep -q 1 $SYSFS/devices/system/memory/memory*/removable; then 31 echo $msg no hot-pluggable memory >&2 37 # list all hot-pluggable memory 43 for memory in $SYSFS/devices/system/memory/memory*; do 44 if grep -q 1 $memory/removable && 45 grep -q $state $memory/state; then 46 echo ${memory##/*/memory} 63 grep -q online $SYSFS/devices/system/memory/memory$1/state [all …]
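The filter the selftest snippet applies (removable, and in the wanted state) can be sketched as below. This is a minimal sketch, not the selftest itself: it builds a throwaway mock of `/sys/devices/system/memory` (the block numbers 8, 9, 10 are invented) so the scan runs on any machine, whereas the real test points `SYSFS` at the live sysfs mount.

```shell
# Mock tree standing in for /sys/devices/system/memory.
SYSFS=$(mktemp -d)

# Three mock memory blocks, all online; only two are removable.
for i in 8 9 10; do
	mkdir -p "$SYSFS/devices/system/memory/memory$i"
	echo online > "$SYSFS/devices/system/memory/memory$i/state"
done
echo 1 > "$SYSFS/devices/system/memory/memory8/removable"
echo 1 > "$SYSFS/devices/system/memory/memory9/removable"
echo 0 > "$SYSFS/devices/system/memory/memory10/removable"

# Same filter as the snippet: keep blocks that are removable and in the
# wanted state, then print only the block number.
state=online
blocks=$(for memory in "$SYSFS"/devices/system/memory/memory*; do
	if grep -q 1 "$memory/removable" &&
	   grep -q "$state" "$memory/state"; then
		echo "${memory##*/memory}"
	fi
done)
echo "$blocks"
rm -r "$SYSFS"
```

The `${memory##*/memory}` expansion strips the longest prefix ending in `/memory`, leaving just the block number, mirroring the snippet's `${memory##/*/memory}`.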
|
| /linux/Documentation/admin-guide/mm/ |
| H A D | concepts.rst | 5 The memory management in Linux is a complex system that evolved over the 7 systems from MMU-less microcontrollers to supercomputers. The memory 16 Virtual Memory Primer 19 The physical memory in a computer system is a limited resource and 20 even for systems that support memory hotplug there is a hard limit on 21 the amount of memory that can be installed. The physical memory is not 27 All this makes dealing directly with physical memory quite complex and 28 to avoid this complexity a concept of virtual memory was developed. 30 The virtual memory abstracts the details of physical memory from the 32 physical memory (demand paging) and provides a mechanism for the [all …]
|
| H A D | numaperf.rst | 2 NUMA Memory Performance 8 Some platforms may have multiple types of memory attached to a compute 9 node. These disparate memory ranges may share some characteristics, such 13 A system supports such heterogeneous memory by grouping each memory type 15 characteristics. Some memory may share the same node as a CPU, and others 16 are provided as memory only nodes. While memory only nodes do not provide 19 nodes with local memory and a memory only node for each of compute node:: 30 A "memory initiator" is a node containing one or more devices such as 31 CPUs or separate memory I/O devices that can initiate memory requests. 32 A "memory target" is a node containing one or more physical address [all …]
|
| /linux/Documentation/ABI/testing/ |
| H A D | sysfs-devices-memory | 1 What: /sys/devices/system/memory 5 The /sys/devices/system/memory contains a snapshot of the 6 internal state of the kernel memory blocks. Files could be 9 Users: hotplug memory add/remove tools 12 What: /sys/devices/system/memory/memoryX/removable 16 The file /sys/devices/system/memory/memoryX/removable is a 17 legacy interface used to indicate whether a memory block is 19 "1" if and only if the kernel supports memory offlining. 20 Users: hotplug memory remove tools 24 What: /sys/devices/system/memory/memoryX/phys_device [all …]
|
| /linux/Documentation/edac/ |
| H A D | memory_repair.rst | 4 EDAC Memory Repair Control 20 Some memory devices support repair operations to address issues in their 21 memory media. Post Package Repair (PPR) and memory sparing are examples of 27 Post Package Repair is a maintenance operation which requests the memory 28 device to perform repair operation on its media. It is a memory self-healing 29 feature that fixes a failing memory location by replacing it with a spare row 32 For example, a CXL memory device with DRAM components that support PPR 42 The data may not be retained and memory requests may not be correctly 46 For example, for CXL memory devices, see CXL spec rev 3.1 [1]_ sections 50 Memory Sparing [all …]
|
| H A D | scrub.rst | 19 Increasing DRAM size and cost have made memory subsystem reliability an 21 could cause expensive or fatal issues. Memory errors are among the top 24 Memory scrubbing is a feature where an ECC (Error-Correcting Code) engine 25 reads data from each memory media location, corrects if necessary and writes 26 the corrected data back to the same memory media location. 28 DIMMs can be scrubbed at a configurable rate to detect uncorrected memory 35 2. When detected, uncorrected errors caught in unallocated memory pages are 39 memory errors. 41 4. The additional data on failures in memory may be used to build up 42 statistics that are later used to decide whether to use memory repair [all …]
|
| /linux/drivers/cxl/ |
| H A D | Kconfig | 16 memory targets, the CXL.io protocol is equivalent to PCI Express. 27 The CXL specification defines a "CXL memory device" sub-class in the 28 PCI "memory controller" base class of devices. Devices identified by 30 memory to be mapped into the system address map (Host-managed Device 31 Memory (HDM)). 33 Say 'y/m' to enable a driver that will attach to CXL memory expander 34 devices enumerated by the memory device class code for configuration 41 bool "RAW Command Interface for Memory Devices" 54 potential impact to memory currently in use by the kernel. 68 Enable support for host managed device memory (HDM) resources [all …]
|
| /linux/Documentation/arch/arm64/ |
| H A D | kdump.rst | 2 crashkernel memory reservation on arm64 9 reserved memory is needed to pre-load the kdump kernel and boot such 12 That reserved memory for kdump is adapted to be able to minimally 19 Through the kernel parameters below, memory can be reserved accordingly 21 large chunk of memory can be found. The low memory reservation needs to 22 be considered if the crashkernel is reserved from the high memory area. 28 Low memory and high memory 31 For kdump reservations, low memory is the memory area under a specific 34 vmcore dumping can be ignored. On arm64, the low memory upper bound is 37 whole system RAM is low memory. Outside of the low memory described [all …]
|
| /linux/tools/testing/selftests/cgroup/ |
| H A D | test_memcontrol.c | 110 * the memory controller. in test_memcg_subtree_control() 118 /* Create two nested cgroups with the memory controller enabled */ in test_memcg_subtree_control() 127 if (cg_write(parent, "cgroup.subtree_control", "+memory")) in test_memcg_subtree_control() 133 if (cg_read_strstr(child, "cgroup.controllers", "memory")) in test_memcg_subtree_control() 136 /* Create two nested cgroups without enabling memory controller */ in test_memcg_subtree_control() 151 if (!cg_read_strstr(child2, "cgroup.controllers", "memory")) in test_memcg_subtree_control() 190 current = cg_read_long(cgroup, "memory.current"); in alloc_anon_50M_check() 197 anon = cg_read_key_long(cgroup, "memory.stat", "anon "); in alloc_anon_50M_check() 224 current = cg_read_long(cgroup, "memory.current"); in alloc_pagecache_50M_check() 228 file = cg_read_key_long(cgroup, "memory in alloc_pagecache_50M_check() [all...] |
| H A D | test_zswap.c | 57 return cg_read_key_long(cg, "memory.stat", "zswpwb"); 62 return cg_read_key_long(cgroup, "memory.stat", "zswpout "); 76 /* Go through the allocated memory to (z)swap in and out pages */ in allocate_and_read_bytes() 107 if (cg_write(group_name, "memory.max", "1M")) { in setup_test_group_1M() 131 if (cg_write(test_group, "memory.max", "1M")) in test_zswap_usage() 140 /* Allocate more than memory.max to push memory into zswap */ in test_zswap_usage() 159 * Check that when memory.zswap.max = 0, no pages can go to the zswap pool for 173 if (cg_write(test_group, "memory.max", "8M")) in test_swapin_nozswap() 175 if (cg_write(test_group, "memory in test_swapin_nozswap() [all...] |
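Both memcontrol and zswap selftests above read flat-keyed cgroup files through `cg_read_key_long()`. A minimal sketch of that parsing, against a mocked `memory.stat` (the byte counts below are made up; on a real cgroup v2 hierarchy the file lives in the cgroup's own directory):

```shell
# Mock cgroup directory with a flat-keyed memory.stat file.
CG=$(mktemp -d)
cat > "$CG/memory.stat" <<'EOF'
anon 52428800
file 4096
zswpout 1310
EOF

# Like cg_read_key_long(): match the key in column 1, print column 2.
cg_read_key() { awk -v k="$1" '$1 == k { print $2 }' "$2"; }

anon=$(cg_read_key anon "$CG/memory.stat")
zswpout=$(cg_read_key zswpout "$CG/memory.stat")
echo "anon=$anon zswpout=$zswpout"
rm -r "$CG"
```

The selftests pass keys with a trailing space (e.g. `"anon "`) to avoid matching prefixed keys such as `anon_thp`; the exact-match `awk` comparison here achieves the same thing.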
| /linux/Documentation/driver-api/cxl/linux/ |
| H A D | memory-hotplug.rst | 4 Memory Hotplug 6 The final phase of surfacing CXL memory to the kernel page allocator is for 7 the `DAX` driver to surface a `Driver Managed` memory region via the 8 memory-hotplug component. 13 2) Hotplug Memory Block size 14 3) Memory Map Resource location 15 4) Driver-Managed Memory Designation 19 The default-online behavior of hotplug memory is dictated by the following, 24 - :code:`/sys/devices/system/memory/auto_online_blocks` value 26 These dictate whether hotplugged memory blocks arrive in one of three states: [all …]
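The default-online behavior the document describes can be inspected from the `auto_online_blocks` value it names. A sketch of that check follows; the sysfs path is the real one, but a mock tree stands in for it here so the read runs without memory hotplug support, and the `offline` policy written to the mock is just the example case.

```shell
# Mock standing in for /sys/devices/system/memory.
SYSFS=$(mktemp -d)
mkdir -p "$SYSFS/devices/system/memory"
echo offline > "$SYSFS/devices/system/memory/auto_online_blocks"

# The documented policy values: offline, online, online_kernel,
# online_movable.
policy=$(cat "$SYSFS/devices/system/memory/auto_online_blocks")
case "$policy" in
offline)
	msg="hotplugged blocks arrive offline" ;;
online)
	msg="hotplugged blocks are onlined automatically" ;;
online_kernel|online_movable)
	msg="hotplugged blocks are onlined into a fixed zone" ;;
esac
echo "$msg"
rm -r "$SYSFS"
```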
|
| /linux/arch/powerpc/kexec/ |
| H A D | ranges.c | 46 * @mem_rngs: Memory ranges. 61 * Memory is allocated in size multiple of MEM_RANGE_CHUNK_SZ. in get_mem_rngs_size() 68 * __add_mem_range - add a memory range to memory ranges list. 69 * @mem_ranges: Range list to add the memory range to. 71 * @size: Size of the memory range to add. 73 * (Re)allocates memory, if needed. 89 pr_debug("Added memory range [%#016llx - %#016llx] at index %d\n", in __add_mem_range() 96 * __merge_memory_ranges - Merges the given memory ranges list. 140 * sort_memory_ranges - Sorts the given memory ranges list. 161 pr_debug("Memory ranges:\n"); in sort_memory_ranges() [all …]
|
| /linux/drivers/gpu/drm/nouveau/nvkm/core/ |
| H A D | memory.c | 24 #include <core/memory.h> 30 nvkm_memory_tags_put(struct nvkm_memory *memory, struct nvkm_device *device, in nvkm_memory_tags_put() argument 39 kfree(memory->tags); in nvkm_memory_tags_put() 40 memory->tags = NULL; in nvkm_memory_tags_put() 48 nvkm_memory_tags_get(struct nvkm_memory *memory, struct nvkm_device *device, in nvkm_memory_tags_get() argument 56 if ((tags = memory->tags)) { in nvkm_memory_tags_get() 57 /* If comptags exist for the memory, but a different amount in nvkm_memory_tags_get() 84 * As memory can be mapped in multiple places, we still in nvkm_memory_tags_get() 94 *ptags = memory->tags = tags; in nvkm_memory_tags_get() 101 struct nvkm_memory *memory) in nvkm_memory_ctor() argument [all …]
|
| /linux/drivers/memory/tegra/ |
| H A D | Kconfig | 3 bool "NVIDIA Tegra Memory Controller support" 8 This driver supports the Memory Controller (MC) hardware found on 14 tristate "NVIDIA Tegra20 External Memory Controller driver" 21 This driver is for the External Memory Controller (EMC) found on 23 This driver is required to change memory timings / clock rate for 24 external memory. 27 tristate "NVIDIA Tegra30 External Memory Controller driver" 33 This driver is for the External Memory Controller (EMC) found on 35 This driver is required to change memory timings / clock rate for 36 external memory. [all …]
|
| /linux/Documentation/core-api/ |
| H A D | memory-allocation.rst | 4 Memory Allocation Guide 7 Linux provides a variety of APIs for memory allocation. You can 14 Most of the memory allocation APIs use GFP flags to express how that 15 memory should be allocated. The GFP acronym stands for "get free 16 pages", the underlying memory allocation function. 19 makes the question "How should I allocate memory?" not that easy to 32 The GFP flags control the allocator's behavior. They tell what memory 34 memory, whether the memory can be accessed by the userspace etc. The 39 * Most of the time ``GFP_KERNEL`` is what you need. Memory for the 40 kernel data structures, DMAable memory, inode cache, all these and [all …]
|
| /linux/tools/testing/memblock/tests/ |
| H A D | basic_api.c | 17 ASSERT_NE(memblock.memory.regions, NULL); in memblock_initialization_check() 18 ASSERT_EQ(memblock.memory.cnt, 0); in memblock_initialization_check() 19 ASSERT_EQ(memblock.memory.max, EXPECTED_MEMBLOCK_REGIONS); in memblock_initialization_check() 20 ASSERT_EQ(strcmp(memblock.memory.name, "memory"), 0); in memblock_initialization_check() 24 ASSERT_EQ(memblock.memory.max, EXPECTED_MEMBLOCK_REGIONS); in memblock_initialization_check() 36 * A simple test that adds a memory block of a specified base address 37 * and size to the collection of available memory regions (memblock.memory). 38 * Expect to create a new entry. The region counter and total memory get 45 rgn = &memblock.memory.regions[0]; in memblock_add_simple_check() 60 ASSERT_EQ(memblock.memory.cnt, 1); in memblock_add_simple_check() [all …]
|
| /linux/Documentation/userspace-api/media/v4l/ |
| H A D | dev-mem2mem.rst | 7 Video Memory-To-Memory Interface 10 A V4L2 memory-to-memory device can compress, decompress, transform, or 11 otherwise convert video data from one format into another format, in memory. 12 Such memory-to-memory devices set the ``V4L2_CAP_VIDEO_M2M`` or 13 ``V4L2_CAP_VIDEO_M2M_MPLANE`` capability. Examples of memory-to-memory 17 A memory-to-memory video node acts just like a normal video node, but it 18 supports both output (sending frames from memory to the hardware) 20 memory) stream I/O. An application will have to setup the stream I/O for 24 Memory-to-memory devices function as a shared resource: you can 33 One of the most common memory-to-memory device is the codec. Codecs [all …]
|
| /linux/arch/arm64/boot/dts/renesas/ |
| H A D | r8a78000-ironhide.dts | 23 memory@60600000 { 24 device_type = "memory"; 29 memory@1080000000 { 30 device_type = "memory"; 34 memory@1200000000 { 35 device_type = "memory"; 39 memory@1400000000 { 40 device_type = "memory"; 44 memory@1600000000 { 45 device_type = "memory"; [all …]
|
| /linux/include/linux/ |
| H A D | execmem.h | 18 * enum execmem_type - types of executable memory ranges 20 * There are several subsystems that allocate executable memory. 22 * permissions, alignment and other parameters for memory that can be used 24 * Types in this enum identify subsystems that allocate executable memory 48 * enum execmem_range_flags - options for executable memory allocations 59 * execmem_fill_trapping_insns - set memory to contain instructions that 61 * @ptr: pointer to memory to fill 93 * @pgprot: permissions for memory in this address space 95 * @flags: options for memory allocations for this range 110 * parameters for executable memory allocations. The ranges that are not [all …]
|
| /linux/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/ |
| H A D | mem.c | 22 #define nvkm_mem(p) container_of((p), struct nvkm_mem, memory) 25 #include <core/memory.h> 31 struct nvkm_memory memory; member 43 nvkm_mem_target(struct nvkm_memory *memory) in nvkm_mem_target() argument 45 return nvkm_mem(memory)->target; in nvkm_mem_target() 49 nvkm_mem_page(struct nvkm_memory *memory) in nvkm_mem_page() argument 55 nvkm_mem_addr(struct nvkm_memory *memory) in nvkm_mem_addr() argument 57 struct nvkm_mem *mem = nvkm_mem(memory); in nvkm_mem_addr() 64 nvkm_mem_size(struct nvkm_memory *memory) in nvkm_mem_size() argument 66 return nvkm_mem(memory)->pages << PAGE_SHIFT; in nvkm_mem_size() [all …]
|
| /linux/include/uapi/linux/ |
| H A D | nitro_enclaves.h | 20 * setting any resources, such as memory and vCPUs, for an 21 * enclave. Memory and vCPUs are set for the slot mapped to an enclave. 35 * ioctl calls to set vCPUs and memory 40 * * ENOMEM - Memory allocation failure for internal 67 * * ENOMEM - Memory allocation failure for internal 87 * in-memory enclave image loading e.g. offset in 88 * enclave memory to start placing the enclave image. 91 * and returns the offset in enclave memory where to 109 * NE_SET_USER_MEMORY_REGION - The command is used to set a memory region for an 110 * enclave, given the allocated memory from the [all …]
|
| /linux/Documentation/driver-api/ |
| H A D | ntb.rst | 6 the separate memory systems of two or more computers to the same PCI-Express 8 registers and memory translation windows, as well as non common features like 15 Memory windows allow translated read and write access to the peer memory. 38 Primary purpose of NTB is to share some piece of memory between at least two 40 mainly used to perform the proper memory window initialization. Typically 41 there are two types of memory window interfaces supported by the NTB API: 48 Memory: Local NTB Port: Peer NTB Port: Peer MMIO: 51 | memory | _v____________ | ______________ 52 | (addr) |<======| MW xlat addr |<====| MW base addr |<== memory-mapped IO 55 So typical scenario of the first type memory window initialization looks: [all …]
|
| H A D | edac.rst | 16 * Memory devices 18 The individual DRAM chips on a memory stick. These devices commonly 20 provides the number of bits that the memory controller expects: 23 * Memory Stick 25 A printed circuit board that aggregates multiple memory devices in 28 called DIMM (Dual Inline Memory Module). 30 * Memory Socket 32 A physical connector on the motherboard that accepts a single memory 37 A memory controller channel, responsible to communicate with a group of 43 It is typically the highest hierarchy on a Fully-Buffered DIMM memory [all …]
|
| /linux/Documentation/devicetree/bindings/soc/fsl/ |
| H A D | fsl,qman-fqd.yaml | 7 title: QMan Private Memory Nodes 13 QMan requires two contiguous range of physical memory used for the backing store 15 This memory is reserved/allocated as a node under the /reserved-memory node. 17 BMan requires a contiguous range of physical memory used for the backing store 18 for BMan Free Buffer Proxy Records (FBPR). This memory is reserved/allocated as 19 a node under the /reserved-memory node. 21 The QMan FQD memory node must be named "qman-fqd" 22 The QMan PFDR memory node must be named "qman-pfdr" 23 The BMan FBPR memory node must be named "bman-fbpr" 25 The following constraints are relevant to the FQD and PFDR private memory: [all …]
|
| /linux/drivers/xen/ |
| H A D | Kconfig | 6 bool "Xen memory balloon driver" 9 The balloon driver allows the Xen domain to request more memory from 10 the system to expand the domain's memory allocation, or alternatively 11 return unneeded memory to the system. 14 bool "Memory hotplug support for Xen balloon driver" 18 Memory hotplug support for Xen balloon driver allows expanding memory 24 memory ranges to use in order to map foreign memory or grants. 26 Memory could be hotplugged in following steps: 28 1) target domain: ensure that memory auto online policy is in 29 effect by checking /sys/devices/system/memory/auto_online_blocks [all …]
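When the auto-online policy the Kconfig text mentions is not in effect, a hotplugged balloon block has to be onlined manually by writing to its `state` file. A sketch of that write, against a mock block (`memory32` and the mock tree are invented for illustration; on a real system the path is `/sys/devices/system/memory/memoryXXX/state`):

```shell
# Mock a single hotplugged block that arrived offline.
SYSFS=$(mktemp -d)
mkdir -p "$SYSFS/devices/system/memory/memory32"
echo offline > "$SYSFS/devices/system/memory/memory32/state"

# The manual online step: write "online" to the block's state file.
echo online > "$SYSFS/devices/system/memory/memory32/state"
state=$(cat "$SYSFS/devices/system/memory/memory32/state")
echo "memory32: $state"
rm -r "$SYSFS"
```

On live sysfs the write is handled by the memory hotplug core rather than stored verbatim, and it fails if the block's pages cannot be onlined.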
|