| /linux/Documentation/mm/ | 
| memory-model.rst | 1 .. SPDX-License-Identifier: GPL-2.0
 4 Physical Memory Model
 7 Physical memory in a system may be addressed in different ways. The
 8 simplest case is when the physical memory starts at address 0 and
 9 spans a contiguous range up to the maximal address. It could be,
 13 different memory banks are attached to different CPUs.
 15 Linux abstracts this diversity using one of the two memory models:
 17 memory models it supports, what the default memory model is and
 18 whether it is possible to manually override that default.
 20 All the memory models track the status of physical page frames using
 [all …]
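The memory-model excerpt above describes the simplest case: physical memory starting at address 0 and spanning one contiguous range. Under that FLATMEM-like assumption, page frame numbers (pfns) and physical addresses convert with a plain shift. A minimal userspace sketch; the PAGE_SHIFT of 12 (4 KiB pages) is a common choice, not something stated in the excerpt:

```c
#include <stdint.h>

/* Illustrative only: with a flat, contiguous physical memory map,
 * a page frame number and a physical address convert with shifts.
 * PAGE_SHIFT of 12 assumes 4 KiB pages. */
#define PAGE_SHIFT 12

static uint64_t pfn_to_phys(uint64_t pfn)  { return pfn << PAGE_SHIFT; }
static uint64_t phys_to_pfn(uint64_t phys) { return phys >> PAGE_SHIFT; }
```

With these helpers, pfn 1 maps to physical address 4096; sparse or hotplugged layouts are exactly what the other memory model (SPARSEMEM) exists to handle.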
 
 | 
| numa.rst | 12 or more CPUs, local memory, and/or IO buses.  For brevity and to
 17 Each of the 'cells' may be viewed as an SMP [symmetric multi-processor] subset
 18 of the system--although some components necessary for a stand-alone SMP system
 20 connected together with some sort of system interconnect--e.g., a crossbar or
 21 point-to-point link are common types of NUMA system interconnects.  Both of
 22 these types of interconnects can be aggregated to create NUMA platforms with
 26 Coherent NUMA or ccNUMA systems.   With ccNUMA systems, all memory is visible
 27 to and accessible from any CPU attached to any cell and cache coherency
 30 Memory access time and effective memory bandwidth varies depending on how far
 31 away the cell containing the CPU or IO bus making the memory access is from the
 [all …]
 
 | 
| /linux/Documentation/admin-guide/mm/ | 
| concepts.rst | 5 The memory management in Linux is a complex system that evolved over the
 6 years and included more and more functionality to support a variety of
 7 systems from MMU-less microcontrollers to supercomputers. The memory
 12 address to a physical address.
 16 Virtual Memory Primer
 19 The physical memory in a computer system is a limited resource and
 20 even for systems that support memory hotplug there is a hard limit on
 21 the amount of memory that can be installed. The physical memory is not
 27 All this makes dealing directly with physical memory quite complex and
 28 to avoid this complexity a concept of virtual memory was developed.
 [all …]
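The primer above explains that virtual memory was introduced to avoid dealing directly with physical memory. The core mechanism can be sketched as a page-number/offset split plus a per-process lookup table; real MMUs use multi-level tables, and the single-level table and 4 KiB page size below are simplifying assumptions:

```c
#include <stdint.h>

/* Toy address translation: a virtual address is split into a virtual
 * page number (vpn) and an in-page offset; a table maps vpn -> pfn. */
#define PAGE_SHIFT 12
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)
#define NR_PAGES   16

static uint32_t page_table[NR_PAGES]; /* vpn -> pfn, zero-initialized */

static uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn = vaddr >> PAGE_SHIFT;      /* which virtual page */
    uint32_t pfn = page_table[vpn];          /* which physical frame */
    return (pfn << PAGE_SHIFT) | (vaddr & PAGE_MASK);
}
```

Setting `page_table[1] = 7` makes virtual address 0x1034 translate to 0x7034: same offset, different frame, which is precisely the indirection the primer motivates.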
 
 | 
| numaperf.rst | 2 NUMA Memory Performance
 8 Some platforms may have multiple types of memory attached to a compute
 9 node. These disparate memory ranges may share some characteristics, such
 13 A system supports such heterogeneous memory by grouping each memory type
 15 characteristics.  Some memory may share the same node as a CPU, and others
 16 are provided as memory only nodes. While memory only nodes do not provide
 17 CPUs, they may still be local to one or more compute nodes relative to
 19 nodes with local memory and a memory only node for each compute node::
 21  +------------------+     +------------------+
 22  | Compute Node 0   +-----+ Compute Node 1   |
 [all …]
 
 | 
| numa_memory_policy.rst | 2 NUMA Memory Policy
 5 What is NUMA Memory Policy?
 8 In the Linux kernel, "memory policy" determines from which node the kernel will
 9 allocate memory in a NUMA system or in an emulated NUMA system.  Linux has
 10 supported platforms with Non-Uniform Memory Access architectures since 2.4.?.
 11 The current memory policy support was added to Linux 2.6 around May 2004.  This
 12 document attempts to describe the concepts and APIs of the 2.6 memory policy
 15 Memory policies should not be confused with cpusets
 16 (``Documentation/admin-guide/cgroup-v1/cpusets.rst``)
 18 memory may be allocated by a set of processes. Memory policies are a
 [all …]
 
 | 
| /linux/drivers/cxl/ | 
| Kconfig | 1 # SPDX-License-Identifier: GPL-2.0-only
 14 	  CXL.mem). The CXL.cache protocol allows devices to hold cachelines
 15 	  locally, the CXL.mem protocol allows devices to be fully coherent
 16 	  memory targets, the CXL.io protocol is equivalent to PCI Express.
 17 	  Say 'y' to enable support for the configuration and management of
 26 	  The CXL specification defines a "CXL memory device" sub-class in the
 27 	  PCI "memory controller" base class of devices. Devices identified by
 29 	  memory to be mapped into the system address map (Host-managed Device
 30 	  Memory (HDM)).
 32 	  Say 'y/m' to enable a driver that will attach to CXL memory expander
 [all …]
 
 | 
| /linux/Documentation/arch/arm64/ | 
| kdump.rst | 2 crashkernel memory reservation on arm64
 7 Kdump mechanism is used to capture a corrupted kernel vmcore so that
 8 it can be subsequently analyzed. In order to do this, memory needs to be
 9 reserved in advance to pre-load the kdump kernel and boot such
 12 That reserved memory for kdump is adapted to be able to minimally
 19 Through the kernel parameters below, memory can be reserved accordingly
 21 large chunk of memory can be found. The low memory reservation needs to
 22 be considered if the crashkernel is reserved from the high memory area.
 24 - crashkernel=size@offset
 25 - crashkernel=size
 [all …]
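The kernel parameters above follow the documented `size[@offset]` syntax. A hypothetical userspace parser for just that form (the kernel's real `parse_crashkernel()` also handles range-dependent variants such as `crashkernel=range:size,...`, which this sketch omits):

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical helper, not kernel code: parse "size[@offset]" with
 * optional K/M/G size suffixes. Returns 0 on success, -1 on bad input. */
static int parse_crashkernel(const char *arg, uint64_t *size, uint64_t *offset)
{
    char *end;

    *offset = 0;
    *size = strtoull(arg, &end, 0);
    switch (*end) {                 /* optional size suffix */
    case 'K': *size <<= 10; end++; break;
    case 'M': *size <<= 20; end++; break;
    case 'G': *size <<= 30; end++; break;
    }
    if (*end == '@')                /* optional fixed offset */
        *offset = strtoull(end + 1, &end, 0);
    return *end == '\0' ? 0 : -1;
}

/* Convenience accessors for demonstration. */
static uint64_t parsed_size(const char *arg)
{ uint64_t s, o; return parse_crashkernel(arg, &s, &o) == 0 ? s : 0; }
static uint64_t parsed_offset(const char *arg)
{ uint64_t s, o; return parse_crashkernel(arg, &s, &o) == 0 ? o : 0; }
```

For example, `"512M@0x80000000"` yields a 512 MiB size with a fixed base of 0x80000000, while `"256M"` leaves the offset at 0 so the kernel picks the location.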
 
 | 
| /linux/Documentation/edac/ | 
| memory_repair.rst | 1 .. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.2-no-invariants-or-later
 4 EDAC Memory Repair Control
 7 Copyright (c) 2024-2025 HiSilicon Limited.
 11            Invariant Sections, Front-Cover Texts nor Back-Cover Texts.
 15 - Written for: 6.15
 18 ------------
 20 Some memory devices support repair operations to address issues in their
 21 memory media. Post Package Repair (PPR) and memory sparing are examples of
 27 Post Package Repair is a maintenance operation which requests the memory
 28 device to perform repair operation on its media. It is a memory self-healing
 [all …]
 
 | 
| scrub.rst | 1 .. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.2-no-invariants-or-later
 7 Copyright (c) 2024-2025 HiSilicon Limited.
 11            Invariant Sections, Front-Cover Texts nor Back-Cover Texts.
 14 - Written for: 6.15
 17 ------------
 19 Increasing DRAM size and cost have made memory subsystem reliability an
 21 could cause expensive or fatal issues. Memory errors are among the top
 24 Memory scrubbing is a feature where an ECC (Error-Correcting Code) engine
 25 reads data from each memory media location, corrects if necessary and writes
 26 the corrected data back to the same memory media location.
 [all …]
 
 | 
| /linux/Documentation/ABI/testing/ | 
| sysfs-devices-memory | 1 What:		/sys/devices/system/memory
 5 		The /sys/devices/system/memory contains a snapshot of the
 6 		internal state of the kernel memory blocks. Files could be
 7 		added or removed dynamically to represent hot-add/remove
 9 Users:		hotplug memory add/remove tools
 10 		http://www.ibm.com/developerworks/wikis/display/LinuxP/powerpc-utils
 12 What:		/sys/devices/system/memory/memoryX/removable
 16 		The file /sys/devices/system/memory/memoryX/removable is a
 17 		legacy interface used to indicate whether a memory block is
 18 		likely to be offlineable or not.  Newer kernel versions return
 [all …]
 
 | 
| /linux/mm/ | 
| Kconfig | 1 # SPDX-License-Identifier: GPL-2.0-only
 3 menu "Memory Management options"
 7 # add proper SWAP support to them, in which case this can be removed.
 13 	bool "Support for paging of anonymous memory (swap)"
 17 	  This option allows you to choose whether you want to have support
 19 	  used to provide more virtual memory than the actual RAM present
 29 	  pages that are in the process of being swapped out and attempts to
 30 	  compress them into a dynamically allocated RAM-based memory pool.
 46 	bool "Shrink the zswap pool on memory pressure"
 52 	  written back to the backing swap device) on memory pressure.
 [all …]
 
 | 
| /linux/arch/powerpc/kexec/ | 
| ranges.c | 1 // SPDX-License-Identifier: GPL-2.0-only
 3  * powerpc code to implement the kexec_file_load syscall
 12  * Based on kexec-tools' kexec-ppc64.c, fs2dt.c.
 27 #include <asm/crashdump-ppc64.h>
 31  * get_max_nr_ranges - Get the max no. of ranges crash_mem structure
 39 	return ((size - sizeof(struct crash_mem)) /  in get_max_nr_ranges()
 44  * get_mem_rngs_size - Get the allocated size of mem_rngs based on
 46  * @mem_rngs:          Memory ranges.
 58 		(mem_rngs->max_nr_ranges * sizeof(struct range)));  in get_mem_rngs_size()
 61 	 * Memory is allocated in size multiple of MEM_RANGE_CHUNK_SZ.  in get_mem_rngs_size()
 [all …]
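The `get_max_nr_ranges()` excerpt divides the allocation size, minus the `struct crash_mem` header, by `sizeof(struct range)`. With simplified stand-in structures (the field layout here is illustrative, not copied from the kernel), the arithmetic looks like:

```c
#include <stddef.h>

/* Simplified mirrors of the kernel structures, for illustration only. */
struct range {
    unsigned long long start, end;
};

struct crash_mem {
    unsigned int max_nr_ranges;
    unsigned int nr_ranges;
    struct range ranges[];          /* flexible array of ranges */
};

/* Same arithmetic as the excerpt: how many struct range entries fit
 * in an allocation of 'size' bytes once the header is accounted for. */
static size_t get_max_nr_ranges(size_t size)
{
    return (size - sizeof(struct crash_mem)) / sizeof(struct range);
}
```

Integer division means any trailing partial entry is simply unusable, which is why the excerpt also mentions allocating in multiples of a chunk size.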
 
 | 
| /linux/Documentation/admin-guide/mm/damon/ | 
| reclaim.rst | 1 .. SPDX-License-Identifier: GPL-2.0
 4 DAMON-based Reclamation
 7 DAMON-based Reclamation (DAMON_RECLAIM) is a static kernel module that aims
 8 to be used for proactive and lightweight reclamation under light memory pressure.
 9 It doesn't aim to replace the LRU-list based page-granularity reclamation, but
 10 to be selectively used for different levels of memory pressure and requirements.
 15 On general memory over-committed systems, proactively reclaiming cold pages
 16 helps save memory and reduce latency spikes incurred by the direct
 20 Free Pages Reporting [3]_ based memory over-commit virtualization systems are
 22 memory to host, and the host reallocates the reported memory to other guests.
 [all …]
 
 | 
| /linux/drivers/base/ | 
| memory.c | 1 // SPDX-License-Identifier: GPL-2.0
 3  * Memory subsystem support
 8  * This file provides the necessary infrastructure to represent
 9  * a SPARSEMEM-memory-model system's physical memory in /sysfs.
 10  * All arch-independent code that assumes MEMORY_HOTPLUG requires
 19 #include <linux/memory.h>
 30 #define MEMORY_CLASS_NAME	"memory"
 47 	return -EINVAL;  in mhp_online_type_from_str()
 66  * Memory blocks are cached in a local radix tree to avoid
 73  * Memory groups, indexed by memory group id (mgid).
 [all …]
 
 | 
| /linux/Documentation/driver-api/cxl/linux/ | 
| memory-hotplug.rst | 1 .. SPDX-License-Identifier: GPL-2.0
 4 Memory Hotplug
 6 The final phase of surfacing CXL memory to the kernel page allocator is for
 7 the `DAX` driver to surface a `Driver Managed` memory region via the
 8 memory-hotplug component.
 10 There are four major configurations to consider:
 13 2) Hotplug Memory Block size
 14 3) Memory Map Resource location
 15 4) Driver-Managed Memory Designation
 19 The default-online behavior of hotplug memory is dictated by the following,
 [all …]
 
 | 
| early-boot.rst | 1 .. SPDX-License-Identifier: GPL-2.0
 7 Linux configuration is split into two major steps: Early-Boot and everything else.
 10 later operations include things like driver probe and memory hotplug.  Linux may
 11 read EFI and ACPI information throughout this process to configure logical
 23 There are 4 pre-boot options that need to be considered during kernel build
 24 which dictate how memory will be managed by Linux during early boot.
 28   * BIOS/EFI Option that dictates whether memory is SystemRAM or
 29     Specific Purpose.  Specific Purpose memory will be deferred to
 30     drivers to manage - and not immediately exposed as system RAM.
 35     Specific Purpose memory.
 [all …]
 
 | 
| /linux/Documentation/core-api/ | 
| memory-allocation.rst | 4 Memory Allocation Guide
 7 Linux provides a variety of APIs for memory allocation. You can
 11 `alloc_pages`. It is also possible to use more specialized allocators,
 14 Most of the memory allocation APIs use GFP flags to express how that
 15 memory should be allocated. The GFP acronym stands for "get free
 16 pages", the underlying memory allocation function.
 19 makes the question "How should I allocate memory?" not that easy to
 32 The GFP flags control the allocator's behavior. They tell what memory
 33 zones can be used, how hard the allocator should try to find free
 34 memory, whether the memory can be accessed by userspace, etc. The
 [all …]
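The guide above says GFP flags tell the allocator which zones it may use and how hard it may try. One way to see this is that the named allocation contexts are OR-compositions of low-level capability bits. The bit positions below are illustrative, not the kernel's actual values; the composition `GFP_KERNEL = __GFP_RECLAIM | __GFP_IO | __GFP_FS` does match the kernel's definition, while the `GFP_ATOMIC` line is simplified (the real one adds high-priority bits rather than being empty):

```c
/* Illustrative bit values only -- the kernel's real GFP bit layout differs. */
#define __GFP_RECLAIM (1u << 0) /* allocator may reclaim pages (may sleep) */
#define __GFP_IO      (1u << 1) /* allocator may start disk I/O */
#define __GFP_FS      (1u << 2) /* allocator may call into filesystem code */

/* Named contexts are ORs of capability bits. */
#define GFP_KERNEL (__GFP_RECLAIM | __GFP_IO | __GFP_FS) /* normal, may sleep */
#define GFP_NOFS   (__GFP_RECLAIM | __GFP_IO)  /* no filesystem re-entry */
#define GFP_NOIO   (__GFP_RECLAIM)             /* no I/O of any kind */
#define GFP_ATOMIC (0u)  /* simplified: none of the sleeping bits */
```

Reading the definitions this way answers the guide's question directly: pick the context, and the bits describe exactly what the allocator is permitted to do on your behalf.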
 
 | 
| /linux/Documentation/dev-tools/ | 
| kasan.rst | 1 .. SPDX-License-Identifier: GPL-2.0
 8 --------
 10 Kernel Address Sanitizer (KASAN) is a dynamic memory safety error detector
 11 designed to find out-of-bounds and use-after-free bugs.
 16 2. Software Tag-Based KASAN
 17 3. Hardware Tag-Based KASAN
 20 debugging, similar to userspace ASan. This mode is supported on many CPU
 21 architectures, but it has significant performance and memory overheads.
 23 Software Tag-Based KASAN or SW_TAGS KASAN, enabled with CONFIG_KASAN_SW_TAGS,
 24 can be used for both debugging and dogfood testing, similar to userspace HWASan.
 [all …]
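Generic KASAN finds out-of-bounds and use-after-free bugs by keeping shadow memory: one shadow byte tracks the validity of each 8-byte granule of real memory. The address mapping can be sketched as below; the scale shift of 3 matches generic KASAN, while the offset value is a placeholder, not any real architecture's constant:

```c
#include <stdint.h>

/* Generic KASAN maps every 8 bytes of memory to one shadow byte:
 *   shadow = (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
 * Scale shift 3 means a granule of 2^3 = 8 bytes; the offset below is
 * an arbitrary placeholder for illustration. */
#define KASAN_SHADOW_SCALE_SHIFT 3
#define KASAN_SHADOW_OFFSET      0x100000000ULL

static uint64_t mem_to_shadow(uint64_t addr)
{
    return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}
```

Because eight consecutive bytes share one shadow byte, the shadow region costs roughly 1/8 of the covered memory, which is the bulk of the memory overhead the excerpt mentions.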
 
 | 
| /linux/Documentation/driver-api/ | 
| ntb.rst | 5 NTB (Non-Transparent Bridge) is a type of PCI-Express bridge chip that connects
 6 the separate memory systems of two or more computers to the same PCI-Express
 8 registers and memory translation windows, as well as non common features like
 9 scratchpad and message registers. Scratchpad registers are read-and-writable
 13 special status bits to make sure the information isn't rewritten by another
 14 peer. Doorbell registers provide a way for peers to send interrupt events.
 15 Memory windows allow translated read and write access to the peer memory.
 21 clients interested in NTB features to discover the NTB devices supported by
 22 hardware drivers.  The term "client" is used here to mean an upper layer
 24 is used here to mean a driver for a specific vendor and model of NTB hardware.
 [all …]
 
 | 
| /linux/Documentation/userspace-api/media/v4l/ | 
| dev-mem2mem.rst | 1 .. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later
 6 Video Memory-To-Memory Interface
 9 A V4L2 memory-to-memory device can compress, decompress, transform, or
 10 otherwise convert video data from one format into another format, in memory.
 11 Such memory-to-memory devices set the ``V4L2_CAP_VIDEO_M2M`` or
 12 ``V4L2_CAP_VIDEO_M2M_MPLANE`` capability. Examples of memory-to-memory
 14 converting from YUV to RGB).
 16 A memory-to-memory video node acts just like a normal video node, but it
 17 supports both output (sending frames from memory to the hardware)
 19 memory) stream I/O. An application will have to set up the stream I/O for
 [all …]
 
 | 
| /linux/tools/testing/selftests/cgroup/ | 
| test_memcontrol.c | 1 /* SPDX-License-Identifier: GPL-2.0 */
 52 	return -1; in test_memcg_subtree_control()
 76 		return -1; in test_memcg_subtree_control()
 107  * the memory controller. in alloc_anon_50M_check()
 115 	/* Create two nested cgroups with the memory controller enabled */ in alloc_anon_50M_check()
 124 	if (cg_write(parent, "cgroup.subtree_control", "+memory")) in alloc_anon_50M_check()
 130 	if (cg_read_strstr(child, "cgroup.controllers", "memory")) in alloc_pagecache_50M_check()
 133 	/* Create two nested cgroups without enabling memory controller in alloc_pagecache_50M_check()
 [all …]
 | 
| /linux/drivers/xen/ | 
| Kconfig | 1 # SPDX-License-Identifier: GPL-2.0-only
 6 	bool "Xen memory balloon driver"
 9 	  The balloon driver allows the Xen domain to request more memory from
 10 	  the system to expand the domain's memory allocation, or alternatively
 11 	  return unneeded memory to the system.
 14 	bool "Memory hotplug support for Xen balloon driver"
 18 	  Memory hotplug support for Xen balloon driver allows expanding memory
 23 	  It's also very useful for non-PV domains to obtain unpopulated physical
 24 	  memory ranges to use in order to map foreign memory or grants.
 26 	  Memory could be hotplugged in the following steps:
 [all …]
 
 | 
| /linux/drivers/gpu/drm/nouveau/nvkm/core/ | 
| memory.c | 4  * Permission is hereby granted, free of charge, to any person obtaining a
 6  * to deal in the Software without restriction, including without limitation
 7  * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 8  * and/or sell copies of the Software, and to permit persons to whom the
 9  * Software is furnished to do so, subject to the following conditions:
 15  * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 24 #include <core/memory.h>
 30 nvkm_memory_tags_put(struct nvkm_memory *memory, struct nvkm_device *device,  in nvkm_memory_tags_put()  argument
 33 	struct nvkm_fb *fb = device->fb;  in nvkm_memory_tags_put()
 36 		mutex_lock(&fb->tags.mutex);  in nvkm_memory_tags_put()
 [all …]
 
 | 
| /linux/include/uapi/linux/ | 
| nitro_enclaves.h | 1 /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
 3  * Copyright 2020-2021 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 16  * NE_CREATE_VM - The command is used to create a slot that is associated with
 20  *		  setting any resources, such as memory and vCPUs, for an
 21  *		  enclave. Memory and vCPUs are set for the slot mapped to an enclave.
 22  *		  A NE CPU pool has to be set before calling this function. The
 25  *		  Its format is detailed in the cpu-lists section:
 26  *		  https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html
 27  *		  CPU 0 and its siblings have to remain available for the
 29  *		  CPU core(s), from the same NUMA node, need(s) to be included
 [all …]
 
 | 
| /linux/tools/testing/memblock/tests/ | 
| basic_api.c | 1 // SPDX-License-Identifier: GPL-2.0-or-later
 17 	ASSERT_NE(memblock.memory.regions, NULL);  in memblock_initialization_check()
 18 	ASSERT_EQ(memblock.memory.cnt, 0);  in memblock_initialization_check()
 19 	ASSERT_EQ(memblock.memory.max, EXPECTED_MEMBLOCK_REGIONS);  in memblock_initialization_check()
 20 	ASSERT_EQ(strcmp(memblock.memory.name, "memory"), 0);  in memblock_initialization_check()
 24 	ASSERT_EQ(memblock.memory.max, EXPECTED_MEMBLOCK_REGIONS);  in memblock_initialization_check()
 36  * A simple test that adds a memory block of a specified base address
 37  * and size to the collection of available memory regions (memblock.memory).
 38  * Expect to create a new entry. The region counter and total memory get
 45 	rgn = &memblock.memory.regions[0];  in memblock_add_simple_check()
 [all …]
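The selftest excerpt asserts on `memblock.memory.regions`, the region counter, and the total memory after adding a block. A toy userspace stand-in with the same observable shape (not the kernel's implementation, which also merges adjacent regions and grows its arrays on demand):

```c
#include <stddef.h>

/* Toy stand-in for the memblock "available memory" collection. */
#define MAX_REGIONS 128

struct region {
    unsigned long long base, size;
};

static struct {
    struct region regions[MAX_REGIONS];
    size_t cnt;                       /* number of registered regions */
    unsigned long long total_size;    /* sum of all region sizes */
} memory;

/* Register a memory block; mirrors what the excerpt's test expects:
 * a new entry appears and the counters are updated. */
static int memory_add(unsigned long long base, unsigned long long size)
{
    if (memory.cnt == MAX_REGIONS)
        return -1;
    memory.regions[memory.cnt++] = (struct region){ base, size };
    memory.total_size += size;
    return 0;
}
```

After one `memory_add(0x1000, 0x2000)`, the counter reads 1 and the total reads 0x2000, which is the same post-condition the excerpt's `memblock_add_simple_check()` verifies against the real memblock.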
 
 |