/linux/Documentation/admin-guide/mm/

concepts.rst
    5: The memory management in Linux is a complex system that evolved over the
    7: systems from MMU-less microcontrollers to supercomputers. The memory
    16: Virtual Memory Primer
    19: The physical memory in a computer system is a limited resource and
    20: even for systems that support memory hotplug there is a hard limit on
    21: the amount of memory that can be installed. The physical memory is not
    27: All this makes dealing directly with physical memory quite complex and
    28: to avoid this complexity a concept of virtual memory was developed.
    30: The virtual memory abstracts the details of physical memory from the
    31: application software, allows to keep only needed information in the
    [all …]

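The concepts.rst excerpt above introduces the virtual memory abstraction. A minimal sketch of the translation step it describes, assuming nothing beyond standard Python (a real MMU walks multi-level page tables in hardware; a dict stands in for the table here):

```python
# Illustrative model of virtual-to-physical address translation.
# A single-level dict stands in for the hardware page table walk.

PAGE_SHIFT = 12                       # 4 KiB pages, as on most Linux systems
PAGE_SIZE = 1 << PAGE_SHIFT

def translate(page_table, vaddr):
    """Map a virtual address to a physical one, or fault if unmapped."""
    vpn = vaddr >> PAGE_SHIFT         # virtual page number
    offset = vaddr & (PAGE_SIZE - 1)  # offset within the page
    if vpn not in page_table:
        raise LookupError("page fault at %#x" % vaddr)
    pfn = page_table[vpn]             # physical frame number backing the page
    return (pfn << PAGE_SHIFT) | offset

# virtual page 2 is backed by physical frame 7
page_table = {2: 7}
paddr = translate(page_table, 0x2010)   # vpn=2, offset=0x10 -> 0x7010
```

Accesses to unmapped pages raise, mirroring the fault the kernel would have to resolve before the access can complete.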
numa_memory_policy.rst
    2: NUMA Memory Policy
    5: What is NUMA Memory Policy?
    8: In the Linux kernel, "memory policy" determines from which node the kernel will
    9: allocate memory in a NUMA system or in an emulated NUMA system. Linux has
    10: supported platforms with Non-Uniform Memory Access architectures since 2.4.?.
    11: The current memory policy support was added to Linux 2.6 around May 2004. This
    12: document attempts to describe the concepts and APIs of the 2.6 memory policy
    15: Memory policies should not be confused with cpusets
    16: (``Documentation/admin-guide/cgroup-v1/cpusets.rst``)
    18: memory may be allocated by a set of processes. Memory policies are a
    [all …]

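The real interfaces for the policies named above are set_mempolicy(2) and mbind(2); the mode names below mirror the kernel's, but the node-selection logic is a deliberately simplified sketch:

```python
# Toy model of NUMA memory-policy node selection.  The fallback-node and
# bind behaviour here is simplified relative to the kernel's.

def pick_node(mode, nodes, fallback_node, page_index=0):
    """Choose the node for one allocation under a given policy mode."""
    if mode == "MPOL_BIND":
        return nodes[0]                        # must stay within the nodemask
    if mode == "MPOL_PREFERRED":
        return nodes[0] if nodes else fallback_node
    if mode == "MPOL_INTERLEAVE":
        return nodes[page_index % len(nodes)]  # round-robin, page by page
    if mode == "MPOL_DEFAULT":
        return fallback_node                   # usually the local node
    raise ValueError(mode)

# Interleave pages 0..3 across nodes 0 and 1
placement = [pick_node("MPOL_INTERLEAVE", [0, 1], 0, i) for i in range(4)]
```

Interleaving spreads successive pages round-robin across the nodemask, which is the behaviour the policy is named for.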
numaperf.rst
    2: NUMA Memory Performance
    8: Some platforms may have multiple types of memory attached to a compute
    9: node. These disparate memory ranges may share some characteristics, such
    13: A system supports such heterogeneous memory by grouping each memory type
    15: characteristics. Some memory may share the same node as a CPU, and others
    16: are provided as memory only nodes. While memory only nodes do not provide
    19: nodes with local memory and a memory only node for each of compute node::
    21: +------------------+     +------------------+
    22: | Compute Node 0   +-----+ Compute Node 1   |
    24: +--------+---------+     +--------+---------+
    [all …]

userfaultfd.rst
    8: Userfaults allow the implementation of on-demand paging from userland
    10: memory page faults, something otherwise only the kernel code could do.
    19: regions of virtual memory with it. Then, any page faults which occur within the
    20: region(s) result in a message being delivered to the userfaultfd, notifying
    24: memory ranges) provides two primary functionalities:
    29: 2) various ``UFFDIO_*`` ioctls that can manage the virtual memory regions
    30: registered in the ``userfaultfd`` that allows userland to efficiently
    32: memory in the background
    34: The real advantage of userfaults if compared to regular virtual memory
    35: management of mremap/mprotect is that the userfaults in all their
    [all …]

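The userfaultfd.rst excerpt describes a fault being delivered to userland and resolved there. The control flow can be modelled in pure Python, assuming a dict-backed region and a handler callback (the real API is a file descriptor plus ``UFFDIO_*`` ioctls such as ``UFFDIO_COPY``; none of that is modelled here):

```python
# Sketch of the userfaultfd control flow: first touch of a missing page
# "faults" to a userland handler, which supplies the page contents.

class FaultedRegion:
    def __init__(self, npages, handler):
        self.npages = npages
        self.pages = {}          # pageno -> bytes actually populated
        self.handler = handler   # userland code that resolves the fault
        self.faults = 0

    def read(self, pageno):
        if not 0 <= pageno < self.npages:
            raise IndexError(pageno)
        if pageno not in self.pages:
            self.faults += 1                           # "fault" delivered...
            self.pages[pageno] = self.handler(pageno)  # ...and resolved
        return self.pages[pageno]

region = FaultedRegion(16, lambda n: b"page-%d" % n)
first = region.read(3)    # faults, handler populates the page
again = region.read(3)    # already present, no new fault
```

Only the first access to a page goes through the handler, which is the on-demand property the document emphasizes.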
/linux/tools/testing/selftests/memory-hotplug/

mem-on-off-test.sh
    2: # SPDX-License-Identifier: GPL-2.0
    6: # Kselftest framework requirement - SKIP code is 4.
    18: SYSFS=`mount -t sysfs | head -1 | awk '{ print $3 }'`
    20: if [ ! -d "$SYSFS" ]; then
    25: if ! ls $SYSFS/devices/system/memory/memory* > /dev/null 2>&1; then
    26: echo $msg memory hotplug is not supported >&2
    30: if ! grep -q 1 $SYSFS/devices/system/memory/memory*/removable; then
    31: echo $msg no hot-pluggable memory >&2
    37: # list all hot-pluggable memory
    41: local state=${1:-.\*}
    [all …]

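The selftest above drives memory blocks through sysfs by writing "online"/"offline" to each block's state file. A minimal model of that state machine, with a dict standing in for /sys/devices/system/memory/memoryN/state:

```python
# Fake sysfs view of memory hotplug: each block exposes a state that
# accepts "online" or "offline", as mem-on-off-test.sh exercises.

blocks = {"memory%d" % i: "online" for i in range(4)}

def set_state(blocks, name, target):
    """Mimic writing 'online'/'offline' to a block's state file."""
    if target not in ("online", "offline"):
        raise ValueError(target)
    if name not in blocks:
        raise KeyError(name)
    blocks[name] = target
    return blocks[name]

set_state(blocks, "memory2", "offline")
online_count = sum(1 for s in blocks.values() if s == "online")
```

The real test additionally checks each block's `removable` attribute first, since not every block can be offlined.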
/linux/Documentation/edac/

memory_repair.rst
    1: .. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.2-no-invariants-or-later
    4: EDAC Memory Repair Control
    7: Copyright (c) 2024-2025 HiSilicon Limited.
    11: Invariant Sections, Front-Cover Texts nor Back-Cover Texts.
    15: - Written for: 6.15
    18: ------------
    20: Some memory devices support repair operations to address issues in their
    21: memory media. Post Package Repair (PPR) and memory sparing are examples of
    27: Post Package Repair is a maintenance operation which requests the memory
    28: device to perform repair operation on its media. It is a memory self-healing
    [all …]

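Post Package Repair, as described above, redirects a failing row to a spare row inside the device. A toy model of just that redirection (the real repair is performed by the DRAM itself and survives power cycles; nothing here corresponds to an actual EDAC interface):

```python
# Toy Post Package Repair: remap a failing row to a spare row, after
# which lookups of the original row resolve to the spare.

class ReparableDevice:
    def __init__(self, rows, spares):
        self.rows = rows
        self.spares = list(range(rows, rows + spares))  # spare row ids
        self.remap = {}                                 # failed -> spare

    def repair(self, bad_row):
        if not self.spares:
            raise RuntimeError("no spare rows left")
        self.remap[bad_row] = self.spares.pop(0)

    def resolve(self, row):
        """Row actually accessed after any repair redirection."""
        return self.remap.get(row, row)

dev = ReparableDevice(rows=8, spares=2)
dev.repair(5)
target = dev.resolve(5)   # redirected to the first spare row
```

Once the spare pool is exhausted, further repair requests fail, which is why sparing resources are a finite maintenance budget.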
scrub.rst
    1: .. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.2-no-invariants-or-later
    7: Copyright (c) 2024-2025 HiSilicon Limited.
    11: Invariant Sections, Front-Cover Texts nor Back-Cover Texts.
    14: - Written for: 6.15
    17: ------------
    19: Increasing DRAM size and cost have made memory subsystem reliability an
    21: could cause expensive or fatal issues. Memory errors are among the top
    24: Memory scrubbing is a feature where an ECC (Error-Correcting Code) engine
    25: reads data from each memory media location, corrects if necessary and writes
    26: the corrected data back to the same memory media location.
    [all …]

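The scrub loop described above (read every location, correct, write back) can be sketched directly. Real engines use SECDED ECC codes; majority-of-three stored copies stands in for ECC in this model:

```python
# Sketch of one scrub pass: detect a corrupted location, correct it,
# and write the corrected value back in place.

def scrub(media):
    """media: list of 3-tuples, three stored copies per location."""
    corrected = 0
    for i, copies in enumerate(media):
        good = max(set(copies), key=copies.count)   # majority vote
        if any(c != good for c in copies):
            media[i] = (good, good, good)           # write back corrected
            corrected += 1
    return corrected

media = [(7, 7, 7), (5, 1, 5), (9, 9, 9)]   # location 1 has a flipped copy
fixed = scrub(media)
```

Scrubbing in the background keeps single-bit errors from accumulating into uncorrectable multi-bit ones, which is the reliability argument the document makes.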
/linux/include/uapi/linux/

nitro_enclaves.h
    1: /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
    3: * Copyright 2020-2021 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    16: * NE_CREATE_VM - The command is used to create a slot that is associated with
    20: * setting any resources, such as memory and vCPUs, for an
    21: * enclave. Memory and vCPUs are set for the slot mapped to an enclave.
    25: * Its format is the detailed in the cpu-lists section:
    26: * https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html
    30: * in the CPU pool.
    34: * * Enclave file descriptor - Enclave file descriptor used with
    35: * ioctl calls to set vCPUs and memory
    [all …]

tee.h
    2: * Copyright (c) 2015-2016, Linaro Limited
    5: * Redistribution and use in source and binary forms, with or without
    11: * 2. Redistributions in binary form must reproduce the above copyright notice,
    12: * this list of conditions and the following disclaimer in the documentation
    18: * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
    22: * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
    24: * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
    49: #define TEE_GEN_CAP_REG_MEM (1 << 2)/* Supports registering shared memory */
    52: #define TEE_MEMREF_NULL (__u64)(-1) /* NULL MemRef Buffer */
    62: * OP-TEE specific capabilities
    [all …]

/linux/Documentation/mm/

memory-model.rst
    1: .. SPDX-License-Identifier: GPL-2.0
    4: Physical Memory Model
    7: Physical memory in a system may be addressed in different ways. The
    8: simplest case is when the physical memory starts at address 0 and
    13: different memory banks are attached to different CPUs.
    15: Linux abstracts this diversity using one of the two memory models:
    17: memory models it supports, what the default memory model is and
    20: All the memory models track the status of physical page frames using
    21: struct page arranged in one or more arrays.
    23: Regardless of the selected memory model, there exists one-to-one
    [all …]

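The one-to-one pfn-to-struct-page correspondence mentioned in memory-model.rst is simplest in FLATMEM: one array indexed by pfn minus the first valid frame number. A sketch of that arithmetic, with a made-up `ARCH_PFN_OFFSET` value and dicts standing in for `struct page`:

```python
# FLATMEM-style pfn <-> page mapping: a single "mem_map" array indexed
# by (pfn - ARCH_PFN_OFFSET).  The offset value is illustrative only.

ARCH_PFN_OFFSET = 0x100       # hypothetical first valid frame number

mem_map = [{"pfn": ARCH_PFN_OFFSET + i} for i in range(16)]

def pfn_to_page(pfn):
    return mem_map[pfn - ARCH_PFN_OFFSET]

def page_to_pfn(page):
    return page["pfn"]

page = pfn_to_page(0x105)
pfn_back = page_to_pfn(page)
```

SPARSEMEM keeps the same one-to-one property but splits the map into sections so that large holes in the physical address space cost no memory.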
numa.rst
    12: or more CPUs, local memory, and/or IO buses. For brevity and to
    15: 'cells' in this document.
    17: Each of the 'cells' may be viewed as an SMP [symmetric multi-processor] subset
    18: of the system--although some components necessary for a stand-alone SMP system
    20: connected together with some sort of system interconnect--e.g., a crossbar or
    21: point-to-point link are common types of NUMA system interconnects. Both of
    26: Coherent NUMA or ccNUMA systems. With ccNUMA systems, all memory is visible
    28: is handled in hardware by the processor caches and/or the system interconnect.
    30: Memory access time and effective memory bandwidth varies depending on how far
    31: away the cell containing the CPU or IO bus making the memory access is from the
    [all …]

/linux/Documentation/admin-guide/mm/damon/

reclaim.rst
    1: .. SPDX-License-Identifier: GPL-2.0
    4: DAMON-based Reclamation
    7: DAMON-based Reclamation (DAMON_RECLAIM) is a static kernel module that aimed to
    8: be used for proactive and lightweight reclamation under light memory pressure.
    9: It doesn't aim to replace the LRU-list based page_granularity reclamation, but
    10: to be selectively used for different level of memory pressure and requirements.
    15: On general memory over-committed systems, proactively reclaiming cold pages
    16: helps saving memory and reducing latency spikes that incurred by the direct
    20: Free Pages Reporting [3]_ based memory over-commit virtualization systems are
    21: good example of the cases. In such systems, the guest VMs reports their free
    [all …]

/linux/mm/

Kconfig
    1: # SPDX-License-Identifier: GPL-2.0-only
    3: menu "Memory Management options"
    7: # add proper SWAP support to them, in which case this can be remove.
    16: bool "Support for paging of anonymous memory (swap)"
    21: for so called swap devices or swap files in your kernel that are
    22: used to provide more virtual memory than the actual RAM present
    23: in your computer. If unsure say Y.
    32: pages that are in the process of being swapped out and attempts to
    33: compress them into a dynamically allocated RAM-based memory pool.
    34: This can result in a significant I/O reduction on swap device and,
    [all …]

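The Kconfig help text above describes zswap: pages headed for the swap device are compressed into a RAM pool instead, avoiding I/O when they are swapped back in. A toy version of that cache, using zlib as a stand-in for the kernel's pluggable compressors:

```python
import zlib

# Toy zswap: a RAM pool of compressed "pages", keyed by swap slot.
# A later swap-in is served from the pool without touching the disk.

class ZswapPool:
    def __init__(self):
        self.pool = {}                     # swap slot -> compressed bytes

    def store(self, slot, page):
        self.pool[slot] = zlib.compress(page)

    def load(self, slot):
        return zlib.decompress(self.pool.pop(slot))

pool = ZswapPool()
page = b"A" * 4096                         # a highly compressible "page"
pool.store(7, page)
pool_bytes = len(zlib.compress(page))      # RAM cost of keeping it pooled
restored = pool.load(7)
```

The win depends on compressibility: the pool holds the compressed size in RAM in exchange for skipping a full page of swap-device I/O, which is the trade-off the help text states.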
/linux/Documentation/admin-guide/

cgroup-v2.rst
    1: .. _cgroup-v2:
    11: conventions of cgroup v2. It describes all userland-visible aspects
    13: future changes must be reflected in this document. Documentation for
    14: v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.
    19: 1-1. Terminology
    20: 1-2. What is cgroup?
    22: 2-1. Mounting
    23: 2-2. Organizing Processes and Threads
    24: 2-2-1. Processes
    25: 2-2-2. Threads
    [all …]

/linux/drivers/xen/

Kconfig
    1: # SPDX-License-Identifier: GPL-2.0-only
    6: bool "Xen memory balloon driver"
    9: The balloon driver allows the Xen domain to request more memory from
    10: the system to expand the domain's memory allocation, or alternatively
    11: return unneeded memory to the system.
    14: bool "Memory hotplug support for Xen balloon driver"
    18: Memory hotplug support for Xen balloon driver allows expanding memory
    24: memory ranges to use in order to map foreign memory or grants.
    26: Memory could be hotplugged in following steps:
    28: 1) target domain: ensure that memory auto online policy is in
    [all …]

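The balloon behaviour in the help text (grow toward a target by returning pages, shrink back by reclaiming them) reduces to a small piece of arithmetic. A sketch, assuming page counts only and ignoring hypervisor limits:

```python
# Toy balloon driver: move pages between "usable by the domain" and
# "held by the balloon" until usable memory matches the target.

def balloon_to_target(usable, ballooned, target):
    """Return (usable, ballooned) after one ballooning adjustment."""
    if usable > target:                       # inflate: give pages back
        ballooned += usable - target
        usable = target
    else:                                     # deflate, bounded by balloon size
        back = min(target - usable, ballooned)
        usable += back
        ballooned -= back
    return usable, ballooned

# shrink the domain from 1000 pages to 800, then grow it back
state = balloon_to_target(1000, 0, 800)       # (800, 200)
state = balloon_to_target(*state, 1000)       # (1000, 0)
```

Deflation is bounded by what the balloon actually holds; growing beyond that is where the memory-hotplug support described above comes in.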
/linux/Documentation/arch/x86/

tdx.rst
    1: .. SPDX-License-Identifier: GPL-2.0
    9: encrypting the guest memory. In TDX, a special module running in a special
    18: CPU-attested software module called 'the TDX module' runs inside the new
    22: TDX also leverages Intel Multi-Key Total Memory Encryption (MKTME) to
    23: provide crypto-protection to the VMs. TDX reserves part of MKTME KeyIDs
    32: TDX boot-time detection
    33: -----------------------
    41: ---------------------------------------
    48: special error. In this case the kernel fails the module initialization
    54: use it as 'metadata' for the TDX memory. It also takes additional CPU
    [all …]

/linux/Documentation/admin-guide/cgroup-v1/

cpusets.rst
    11: - Portions Copyright (c) 2004-2006 Silicon Graphics, Inc.
    12: - Modified by Paul Jackson <pj@sgi.com>
    13: - Modified by Christoph Lameter <cl@gentwo.org>
    14: - Modified by Paul Menage <menage@google.com>
    15: - Modified by Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
    25: 1.6 What is memory spread ?
    41: ----------------------
    43: Cpusets provide a mechanism for assigning a set of CPUs and Memory
    44: Nodes to a set of tasks. In this document "Memory Node" refers to
    45: an on-line node that contains memory.
    [all …]

/linux/Documentation/admin-guide/sysctl/

vm.rst
    11: For general info and legal blurb, please look in index.rst.
    13: ------------------------------------------------------------------------------
    15: This file contains the documentation for the sysctl files in
    18: The files in this directory can be used to tune the operation
    19: of the virtual memory (VM) subsystem of the Linux kernel and
    23: files can be found in mm/swap.c.
    25: Currently, these files are in /proc/sys/vm:
    27: - admin_reserve_kbytes
    28: - compact_memory
    29: - compaction_proactiveness
    [all …]

/linux/Documentation/driver-api/cxl/platform/

bios-and-efi.rst
    1: .. SPDX-License-Identifier: GPL-2.0
    19: * BIOS/EFI create the system memory map (EFI Memory Map, E820, etc)
    24: static memory map configuration. More detail on these tables can be found
    29: on physical memory region size and alignment, memory holes, HDM interleave,
    39: When this is enabled, this bit tells linux to defer management of a memory
    40: region to a driver (in this case, the CXL driver). Otherwise, the memory is
    41: treated as "normal memory", and is exposed to the page allocator during
    45: ---------------------
    60: Memory Attribute` field. This may be called something else on your platform.
    62: :code:`uefisettings get "CXL Memory Attribute"` ::
    [all …]

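The decision described above (defer a marked region to a driver, otherwise hand it to the page allocator) is a partition of the firmware memory map by attribute bit. A sketch with an illustrative bit value standing in for the real Specific Purpose attribute:

```python
# Sketch of the early-boot memory-map split: entries carrying the
# "Specific Purpose" attribute are held back for a driver; everything
# else becomes normal System RAM.  The bit value here is made up.

EFI_MEMORY_SP_MODEL = 1 << 17   # illustrative stand-in for the real flag

def classify(memory_map):
    """Split (start, size, attrs) entries into RAM vs driver-deferred."""
    system_ram, deferred = [], []
    for start, size, attrs in memory_map:
        bucket = deferred if attrs & EFI_MEMORY_SP_MODEL else system_ram
        bucket.append((start, size))
    return system_ram, deferred

memmap = [
    (0x00000000, 0x80000000, 0),                    # normal RAM
    (0x100000000, 0x40000000, EFI_MEMORY_SP_MODEL), # "specific purpose" range
]
ram, held_back = classify(memmap)
```

Ranges in `held_back` never reach the page allocator at boot; a driver (here, CXL) decides later whether and how to online them.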
/linux/drivers/cxl/

Kconfig
    1: # SPDX-License-Identifier: GPL-2.0-only
    16: memory targets, the CXL.io protocol is equivalent to PCI Express.
    26: The CXL specification defines a "CXL memory device" sub-class in the
    27: PCI "memory controller" base class of devices. Device's identified by
    29: memory to be mapped into the system address map (Host-managed Device
    30: Memory (HDM)).
    32: Say 'y/m' to enable a driver that will attach to CXL memory expander
    33: devices enumerated by the memory device class code for configuration
    35: Type 3 CXL Device in the CXL 2.0 specification for more details.
    40: bool "RAW Command Interface for Memory Devices"
    [all …]

/linux/Documentation/driver-api/cxl/linux/

early-boot.rst
    1: .. SPDX-License-Identifier: GPL-2.0
    7: Linux configuration is split into two major steps: Early-Boot and everything else.
    10: later operations include things like driver probe and memory hotplug. Linux may
    14: During Linux Early Boot stage (functions in the kernel that have the __init
    23: There are 4 pre-boot options that need to be considered during kernel build
    24: which dictate how memory will be managed by Linux during early boot.
    28: * BIOS/EFI Option that dictates whether memory is SystemRAM or
    29: Specific Purpose. Specific Purpose memory will be deferred to
    30: drivers to manage - and not immediately exposed as system RAM.
    35: Specific Purpose memory.
    [all …]

cxl-driver.rst
    1: .. SPDX-License-Identifier: GPL-2.0
    7: The devices described in this section are present in ::
    12: The :code:`cxl-cli` library, maintained as part of the NDCTL project, may
    19: * cxl_core - fundamental init interface and core object creation
    20: * cxl_port - initializes root and provides port enumeration interface.
    21: * cxl_acpi - initializes root decoders and interacts with ACPI data.
    22: * cxl_p/mem - initializes memory devices
    23: * cxl_pci - uses cxl_port to enumerate the actual fabric hierarchy.
    27: Here is an example from a single-socket system with 4 host bridges. Two host
    28: bridges have a single memory device attached, and the devices are interleaved
    [all …]

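The interleaving mentioned in the cxl-driver.rst excerpt spreads consecutive granules of host physical address round-robin across the interleave targets. A sketch of that decode step, with made-up device names and an illustrative granule size:

```python
# Toy CXL interleave decode: within an interleaved region, each granule
# of host physical address is serviced round-robin by one target.
# Names and sizes are illustrative, not from a real decoder.

def decode_target(addr, base, granule, targets):
    """Pick which interleave target services a host physical address."""
    granule_index = (addr - base) // granule
    return targets[granule_index % len(targets)]

targets = ["mem0", "mem1"]            # two endpoints, 2-way interleave
base, granule = 0x10000000, 256

first = decode_target(base, base, granule, targets)
second = decode_target(base + 256, base, granule, targets)
```

Real decoders also apply the spec's address-bit selection and can stack decode across host bridge, switch, and endpoint levels; this shows only the round-robin idea.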
/linux/Documentation/virt/hyperv/

coco.rst
    1: .. SPDX-License-Identifier: GPL-2.0
    5: Hyper-V can create and run Linux guests that are Confidential Computing
    7: the confidentiality and integrity of data in the VM's memory, even in the
    9: CoCo VMs on Hyper-V share the generic CoCo VM threat model and security
    10: objectives described in Documentation/security/snp-tdx-threat-model.rst. Note
    11: that Hyper-V specific code in Linux refers to CoCo VMs as "isolated VMs" or
    14: A Linux CoCo VM on Hyper-V requires the cooperation and interaction of the
    19: * The hardware runs a version of Windows/Hyper-V with support for CoCo VMs
    25: * AMD processor with SEV-SNP. Hyper-V does not run guest VMs with AMD SME,
    26: SEV, or SEV-ES encryption, and such encryption is not sufficient for a CoCo
    [all …]

/linux/Documentation/security/

self-protection.rst
    2: Kernel Self-Protection
    5: Kernel self-protection is the design and implementation of systems and
    6: structures within the Linux kernel to protect against security flaws in
    9: and actively detecting attack attempts. Not all topics are explored in
    13: In the worst-case scenario, we assume an unprivileged local attacker
    14: has arbitrary read and write access to the kernel's memory. In many
    16: but with systems in place that defend against the worst case we'll
    18: still be kept in mind, is protecting the kernel against a _privileged_
    23: The goals for successful self-protection systems would be that they
    24: are effective, on by default, require no opt-in by developers, have no
    [all …]

/linux/Documentation/driver-api/pci/

p2pdma.rst
    1: .. SPDX-License-Identifier: GPL-2.0
    4: PCI Peer-to-Peer DMA Support
    9: called Peer-to-Peer (or P2P). However, there are a number of issues that
    10: make P2P transactions tricky to do in a perfectly safe way.
    13: transactions between hierarchy domains, and in PCIe, each Root Port
    18: same PCI bridge, as such devices are all in the same PCI hierarchy
    23: The second issue is that to make use of existing interfaces in Linux,
    24: memory that is used for P2P transactions needs to be backed by struct
    33: In a given P2P implementation there may be three or more different
    34: types of kernel drivers in play:
    [all …]

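The first constraint in the p2pdma.rst excerpt (only assume P2P works between devices under the same PCI bridge) is an ancestor check on the bus topology. A simplified sketch over a made-up topology; the real code also whitelists root complexes known to route P2P traffic, which this model ignores:

```python
# Simplified P2P safety check: two devices qualify when their upstream
# paths meet at some bridge below the root complex.  Topology is made up.

def path_to_root(topology, dev):
    """Walk child -> parent links up to the root."""
    path = [dev]
    while topology[dev] is not None:
        dev = topology[dev]
        path.append(dev)
    return path

def shares_upstream_bridge(topology, a, b):
    """True if a and b meet at a bridge below the root complex."""
    common = set(path_to_root(topology, a)) & set(path_to_root(topology, b))
    common.discard("root")          # meeting only at the root doesn't count
    return bool(common - {a, b})

# child -> parent; two NVMe devices under one switch, a NIC elsewhere
topology = {"root": None, "switch": "root", "nvme0": "switch",
            "nvme1": "switch", "nic": "root"}
```

Under this model the two NVMe devices qualify for P2P while the NIC, whose path meets theirs only at the root, does not.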