
Searched +full:in +full:- +full:memory (Results 1 – 25 of 1054) sorted by relevance


/linux/Documentation/admin-guide/mm/
memory-hotplug.rst
2 Memory Hot(Un)Plug
5 This document describes generic Linux support for memory hot(un)plug with
13 Memory hot(un)plug allows for increasing and decreasing the size of physical
14 memory available to a machine at runtime. In the simplest case, it consists of
18 Memory hot(un)plug is used for various purposes:
20 - The physical memory available to a machine can be adjusted at runtime, up- or
21 downgrading the memory capacity
[all …]
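
The excerpt above is from the admin guide that documents onlining and offlining memory blocks through sysfs. As a hedged illustration (not code from the document), the minimal C sketch below walks /sys/devices/system/memory and prints each block's current state; the layout is the one the guide describes, and the program deliberately only reads. Writing "online" or "offline" to a block's state file is what actually hot(un)plugs it.

/*
 * Minimal read-only sketch, assuming the sysfs layout described in
 * memory-hotplug.rst: each memory block appears as
 * /sys/devices/system/memory/memoryN with a "state" file.
 */
#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *base = "/sys/devices/system/memory";
	DIR *dir = opendir(base);
	struct dirent *de;

	if (!dir) {
		perror("opendir");
		return 1;
	}
	while ((de = readdir(dir)) != NULL) {
		char path[512], state[64];
		FILE *f;

		/* only memory<N> directories, skip files like block_size_bytes */
		if (strncmp(de->d_name, "memory", 6) != 0 ||
		    !isdigit((unsigned char)de->d_name[6]))
			continue;
		snprintf(path, sizeof(path), "%s/%s/state", base, de->d_name);
		f = fopen(path, "r");
		if (!f)
			continue;
		if (fgets(state, sizeof(state), f))
			printf("%s: %s", de->d_name, state); /* state already ends in '\n' */
		fclose(f);
	}
	closedir(dir);
	return 0;
}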
concepts.rst
5 The memory management in Linux is a complex system that evolved over the
7 systems from MMU-less microcontrollers to supercomputers. The memory
16 Virtual Memory Primer
19 The physical memory in a computer system is a limited resource and
20 even for systems that support memory hotplug there is a hard limit on
21 the amount of memory that can be installed. The physical memory is not
27 All this makes dealing directly with physical memory quite complex and
28 to avoid this complexity a concept of virtual memory was developed.
30 The virtual memory abstracts the details of physical memory from the
31 application software, allows keeping only the needed information in the
[all …]
numa_memory_policy.rst
2 NUMA Memory Policy
5 What is NUMA Memory Policy?
8 In the Linux kernel, "memory policy" determines from which node the kernel will
9 allocate memory in a NUMA system or in an emulated NUMA system. Linux has
10 supported platforms with Non-Uniform Memory Access architectures since 2.4.?.
11 The current memory policy support was added to Linux 2.6 around May 2004. This
12 document attempts to describe the concepts and APIs of the 2.6 memory policy
15 Memory policies should not be confused with cpusets
16 (``Documentation/admin-guide/cgroup-v1/cpusets.rst``)
18 memory may be allocated by a set of processes. Memory policies are a
[all …]
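
The memory-policy document excerpted above describes the per-task NUMA policy API. As a rough sketch of that API (not code taken from the document), the program below binds the calling thread's future allocations to node 0 with set_mempolicy(2) and then touches a buffer so pages are actually allocated under the new policy; numaif.h comes from the libnuma development package, and node 0 and the buffer size are arbitrary example values.

/*
 * Sketch: bind this thread's future page allocations to NUMA node 0 via
 * set_mempolicy(2). Node 0 is just an example node.
 */
#include <numaif.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	unsigned long nodemask = 1UL << 0;	/* node 0 only */
	size_t len = 16 * 4096;
	char *buf;

	if (set_mempolicy(MPOL_BIND, &nodemask, 8 * sizeof(nodemask))) {
		perror("set_mempolicy");
		return 1;
	}

	buf = malloc(len);
	if (!buf)
		return 1;
	memset(buf, 0, len);	/* fault the pages in under MPOL_BIND */
	printf("touched %zu bytes under MPOL_BIND to node 0\n", len);
	free(buf);
	return 0;
}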
numaperf.rst
2 NUMA Memory Performance
8 Some platforms may have multiple types of memory attached to a compute
9 node. These disparate memory ranges may share some characteristics, such
13 A system supports such heterogeneous memory by grouping each memory type
15 characteristics. Some memory may share the same node as a CPU, and others
16 are provided as memory only nodes. While memory only nodes do not provide
19 nodes with local memory and a memory only node for each of compute node::
21 [partial ASCII diagram: Compute Node 0 linked to Compute Node 1, each with local memory]
[all …]
userfaultfd.rst
8 Userfaults allow the implementation of on-demand paging from userland
10 memory page faults, something otherwise only the kernel code could do.
19 regions of virtual memory with it. Then, any page faults which occur within the
20 region(s) result in a message being delivered to the userfaultfd, notifying
24 memory ranges) provides two primary functionalities:
29 2) various ``UFFDIO_*`` ioctls that can manage the virtual memory regions
30 registered in the ``userfaultfd`` that allows userland to efficiently
32 memory in the background
34 The real advantage of userfaults if compared to regular virtual memory
35 management of mremap/mprotect is that the userfaults in all their
[all …]
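
The userfaultfd snippet above names the two halves of the interface: a file descriptor that delivers fault messages and the UFFDIO_* ioctls used to register regions and resolve faults. A compressed, hedged sketch of the registration half follows; it stops short of the fault-handling loop a real user would run (reading events from the descriptor and resolving them with UFFDIO_COPY or UFFDIO_ZEROPAGE), and error handling is minimal.

/*
 * Sketch: create a userfaultfd and register an anonymous mapping for
 * missing-page events. On kernels that restrict unprivileged userfaultfd
 * this may require privilege or vm.unprivileged_userfaultfd=1.
 */
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	size_t len = 16 * 4096;
	int uffd = syscall(SYS_userfaultfd, O_CLOEXEC | O_NONBLOCK);

	if (uffd < 0) {
		perror("userfaultfd");
		return 1;
	}

	struct uffdio_api api = { .api = UFFD_API, .features = 0 };
	if (ioctl(uffd, UFFDIO_API, &api) < 0) {
		perror("UFFDIO_API");
		return 1;
	}

	char *area = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (area == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	struct uffdio_register reg = {
		.range = { .start = (unsigned long)area, .len = len },
		.mode  = UFFDIO_REGISTER_MODE_MISSING,
	};
	if (ioctl(uffd, UFFDIO_REGISTER, &reg) < 0) {
		perror("UFFDIO_REGISTER");
		return 1;
	}
	printf("registered %zu bytes at %p with userfaultfd %d\n", len, area, uffd);
	return 0;
}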
/linux/tools/testing/selftests/memory-hotplug/
mem-on-off-test.sh
2 # SPDX-License-Identifier: GPL-2.0
6 # Kselftest framework requirement - SKIP code is 4.
18 SYSFS=`mount -t sysfs | head -1 | awk '{ print $3 }'`
20 if [ ! -d "$SYSFS" ]; then
25 if ! ls $SYSFS/devices/system/memory/memory* > /dev/null 2>&1; then
26 echo $msg memory hotplug is not supported >&2
30 if ! grep -q 1 $SYSFS/devices/system/memory/memory*/removable; then
31 echo $msg no hot-pluggable memory >&2
37 # list all hot-pluggable memory
41 local state=${1:-.\*}
[all …]
/linux/Documentation/mm/
hmm.rst
2 Heterogeneous Memory Management (HMM)
5 Provide infrastructure and helpers to integrate non-conventional memory (device
6 memory like GPU on board memory) into regular kernel path, with the cornerstone
7 of this being specialized struct page for such memory (see sections 5 to 7 of
10 HMM also provides optional helpers for SVM (Share Virtual Memory), i.e.,
17 This document is divided as follows: in the first section I expose the problems
18 related to using device specific memory allocators. In the second section, I
21 CPU page-table mirroring works and the purpose of HMM in this context. The
22 fifth section deals with how device memory is represented inside the kernel.
28 Problems of using a device specific memory allocator
[all …]
memory-model.rst
1 .. SPDX-License-Identifier: GPL-2.0
4 Physical Memory Model
7 Physical memory in a system may be addressed in different ways. The
8 simplest case is when the physical memory starts at address 0 and
13 different memory banks are attached to different CPUs.
15 Linux abstracts this diversity using one of the two memory models:
17 memory models it supports, what the default memory model is and
20 All the memory models track the status of physical page frames using
21 struct page arranged in one or more arrays.
23 Regardless of the selected memory model, there exists one-to-one
[all …]
numa.rst
12 or more CPUs, local memory, and/or IO buses. For brevity and to
15 'cells' in this document.
17 Each of the 'cells' may be viewed as an SMP [symmetric multi-processor] subset
18 of the system--although some components necessary for a stand-alone SMP system
20 connected together with some sort of system interconnect--e.g., a crossbar or
21 point-to-point link are common types of NUMA system interconnects. Both of
26 Coherent NUMA or ccNUMA systems. With ccNUMA systems, all memory is visible
28 is handled in hardware by the processor caches and/or the system interconnect.
30 Memory access time and effective memory bandwidth varies depending on how far
31 away the cell containing the CPU or IO bus making the memory access is from the
[all …]
/linux/include/uapi/linux/
nitro_enclaves.h
1 /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
3 * Copyright 2020-2021 Amazon.com, Inc. or its affiliates. All Rights Reserved.
16 * NE_CREATE_VM - The command is used to create a slot that is associated with
20 * setting any resources, such as memory and vCPUs, for an
21 * enclave. Memory and vCPUs are set for the slot mapped to an enclave.
25 * Its format is detailed in the cpu-lists section:
26 * https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html
30 * in the CPU pool.
34 * * Enclave file descriptor - Enclave file descriptor used with
35 * ioctl calls to set vCPUs and memory
[all …]
tee.h
2 * Copyright (c) 2015-2016, Linaro Limited
5 * Redistribution and use in source and binary forms, with or without
11 * 2. Redistributions in binary form must reproduce the above copyright notice,
12 * this list of conditions and the following disclaimer in the documentation
18 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
22 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
24 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
49 #define TEE_GEN_CAP_REG_MEM (1 << 2)/* Supports registering shared memory */
52 #define TEE_MEMREF_NULL (__u64)(-1) /* NULL MemRef Buffer */
62 * OP-TEE specific capabilities
[all …]
virtio_mem.h
1 /* SPDX-License-Identifier: BSD-3-Clause */
13 * Redistribution and use in source and binary forms, with or without
18 * 2. Redistributions in binary form must reproduce the above copyright
19 * notice, this list of conditions and the following disclaimer in the
27 * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL IBM OR
32 * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
33 * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN AN
[all …]
/linux/Documentation/admin-guide/mm/damon/
reclaim.rst
1 .. SPDX-License-Identifier: GPL-2.0
4 DAMON-based Reclamation
7 DAMON-based Reclamation (DAMON_RECLAIM) is a static kernel module that is aimed to
8 be used for proactive and lightweight reclamation under light memory pressure.
9 It doesn't aim to replace the LRU-list based page-granularity reclamation, but
10 to be selectively used for different levels of memory pressure and requirements.
15 On general memory over-committed systems, proactively reclaiming cold pages
16 helps save memory and reduce latency spikes that are incurred by the direct
20 Free Pages Reporting [3]_ based memory over-commit virtualization systems are
21 good examples of such cases. In such systems, the guest VMs report their free
[all …]
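
DAMON_RECLAIM is controlled through module parameters, and its documentation describes enabling it by writing Y to the "enabled" parameter under /sys/module/damon_reclaim/parameters/. A small hedged C sketch of that step follows; it assumes a kernel built with DAMON_RECLAIM and requires root, otherwise the parameter file is absent or not writable.

/*
 * Sketch: enable DAMON_RECLAIM by writing "Y" to its "enabled" module
 * parameter, the same effect as echoing Y to that file from a shell.
 */
#include <stdio.h>

int main(void)
{
	const char *path = "/sys/module/damon_reclaim/parameters/enabled";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return 1;
	}
	if (fputs("Y\n", f) == EOF) {
		perror("fputs");
		fclose(f);
		return 1;
	}
	fclose(f);
	printf("DAMON_RECLAIM enabled via %s\n", path);
	return 0;
}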
/linux/include/linux/
memory.h
1 /* SPDX-License-Identifier: GPL-2.0 */
3 * include/linux/memory.h - generic memory definition
6 * basic "struct memory_block" here, which can be embedded in per-arch
9 * Basic handling of the devices is done in drivers/base/memory.c
10 * and system devices are handled in drivers/base/sys.c.
12 * Memory block are exported via sysfs in the class/memory/devices/
26 * struct memory_group - a logical group of memory blocks
27 * @nid: The node id for all memory blocks inside the memory group.
28 * @blocks: List of all memory blocks belonging to this memory group.
29 * @present_kernel_pages: Present (online) memory outside ZONE_MOVABLE of this
[all …]
/linux/Documentation/core-api/
memory-hotplug.rst
4 Memory hotplug
7 Memory hotplug event notifier
12 There are six types of notification defined in ``include/linux/memory.h``:
15 Generated before new memory becomes available in order to be able to
16 prepare subsystems to handle memory. The page allocator is still unable
17 to allocate from the new memory.
23 Generated when memory has been successfully brought online. The callback may
24 allocate pages from the new memory.
27 Generated to begin the process of offlining memory. Allocations are no
28 longer possible from the memory but some of the memory to be offlined
[all …]
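
The core-api snippet above lists the hotplug notifier events (MEM_GOING_ONLINE, MEM_ONLINE, and so on) defined in include/linux/memory.h. A hedged kernel-module sketch of hooking those events with register_memory_notifier() follows; the module name "memdemo" and the log messages are made up for illustration, and the notifier does nothing beyond printing.

/*
 * Sketch of a kernel module that listens for memory hotplug events using
 * the notifier interface from include/linux/memory.h.
 */
#include <linux/init.h>
#include <linux/memory.h>
#include <linux/module.h>
#include <linux/notifier.h>

static int memdemo_callback(struct notifier_block *nb,
			    unsigned long action, void *arg)
{
	struct memory_notify *mn = arg;

	switch (action) {
	case MEM_GOING_ONLINE:
		pr_info("memdemo: %lu pages about to come online\n", mn->nr_pages);
		break;
	case MEM_ONLINE:
		pr_info("memdemo: memory onlined, start pfn %lu\n", mn->start_pfn);
		break;
	case MEM_GOING_OFFLINE:
	case MEM_OFFLINE:
		pr_info("memdemo: offline event %lu\n", action);
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block memdemo_nb = {
	.notifier_call = memdemo_callback,
};

static int __init memdemo_init(void)
{
	return register_memory_notifier(&memdemo_nb);
}

static void __exit memdemo_exit(void)
{
	unregister_memory_notifier(&memdemo_nb);
}

module_init(memdemo_init);
module_exit(memdemo_exit);
MODULE_LICENSE("GPL");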
/linux/drivers/xen/
Kconfig
1 # SPDX-License-Identifier: GPL-2.0-only
6 bool "Xen memory balloon driver"
9 The balloon driver allows the Xen domain to request more memory from
10 the system to expand the domain's memory allocation, or alternatively
11 return unneeded memory to the system.
14 bool "Memory hotplug support for Xen balloon driver"
18 Memory hotplug support for Xen balloon driver allows expanding memory
24 memory ranges to use in order to map foreign memory or grants.
26 Memory could be hotplugged in the following steps:
28 1) target domain: ensure that memory auto online policy is in
[all …]
/linux/Documentation/arch/x86/
tdx.rst
1 .. SPDX-License-Identifier: GPL-2.0
9 encrypting the guest memory. In TDX, a special module running in a special
18 CPU-attested software module called 'the TDX module' runs inside the new
22 TDX also leverages Intel Multi-Key Total Memory Encryption (MKTME) to
23 provide crypto-protection to the VMs. TDX reserves part of MKTME KeyIDs
32 TDX boot-time detection
33 -----------------------
41 ---------------------------------------
48 special error. In this case the kernel fails the module initialization
54 use it as 'metadata' for the TDX memory. It also takes additional CPU
[all …]
/linux/Documentation/admin-guide/sysctl/
vm.rst
11 For general info and legal blurb, please look in index.rst.
13 ------------------------------------------------------------------------------
15 This file contains the documentation for the sysctl files in
18 The files in this directory can be used to tune the operation
19 of the virtual memory (VM) subsystem of the Linux kernel and
23 files can be found in mm/swap.c.
25 Currently, these files are in /proc/sys/vm:
27 - admin_reserve_kbytes
28 - compact_memory
29 - compaction_proactiveness
[all …]
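
The vm.rst excerpt lists the tunables under /proc/sys/vm, including admin_reserve_kbytes and compact_memory. A small hedged sketch follows that reads the first and, when run as root, writes 1 to compact_memory to request compaction of all zones as that file's documented behavior describes; the paths come from the document above, everything else is illustrative.

/*
 * Sketch: read one /proc/sys/vm tunable, then write 1 to compact_memory.
 * Writing requires CAP_SYS_ADMIN; expect a permission error otherwise.
 */
#include <stdio.h>

int main(void)
{
	char buf[64];
	FILE *f = fopen("/proc/sys/vm/admin_reserve_kbytes", "r");

	if (f) {
		if (fgets(buf, sizeof(buf), f))
			printf("admin_reserve_kbytes = %s", buf);
		fclose(f);
	}

	f = fopen("/proc/sys/vm/compact_memory", "w");
	if (!f) {
		perror("compact_memory");
		return 1;
	}
	fputs("1\n", f);
	fclose(f);
	puts("requested compaction of all zones");
	return 0;
}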
/linux/Documentation/admin-guide/
cgroup-v2.rst
1 .. _cgroup-v2:
11 conventions of cgroup v2. It describes all userland-visible aspects
13 future changes must be reflected in this document. Documentation for
14 v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.
19 1-1. Terminology
20 1-2. What is cgroup?
22 2-1. Mounting
23 2-2. Organizing Processes and Threads
24 2-2-1. Processes
25 2-2-2. Threads
[all …]
/linux/Documentation/arch/powerpc/
firmware-assisted-dump.rst
2 Firmware-Assisted Dump
7 The goal of firmware-assisted dump is to enable the dump of
8 a crashed system, and to do so from a fully-reset system, and
10 in production use.
12 - Firmware-Assisted Dump (FADump) infrastructure is intended to replace
14 - Fadump uses the same firmware interfaces and memory reservation model
16 - Unlike phyp dump, FADump exports the memory dump through /proc/vmcore
17 in the ELF format in the same way as kdump. This helps us reuse the
19 - Unlike phyp dump, userspace tool does not need to refer any sysfs
21 - Unlike phyp dump, FADump allows user to release all the memory reserved
[all …]
/linux/drivers/cxl/
Kconfig
1 # SPDX-License-Identifier: GPL-2.0-only
15 memory targets, the CXL.io protocol is equivalent to PCI Express.
25 The CXL specification defines a "CXL memory device" sub-class in the
26 PCI "memory controller" base class of devices. Devices identified by
28 memory to be mapped into the system address map (Host-managed Device
29 Memory (HDM)).
31 Say 'y/m' to enable a driver that will attach to CXL memory expander
32 devices enumerated by the memory device class code for configuration
34 Type 3 CXL Device in the CXL 2.0 specification for more details.
39 bool "RAW Command Interface for Memory Devices"
[all …]
/linux/Documentation/security/
self-protection.rst
2 Kernel Self-Protection
5 Kernel self-protection is the design and implementation of systems and
6 structures within the Linux kernel to protect against security flaws in
9 and actively detecting attack attempts. Not all topics are explored in
13 In the worst-case scenario, we assume an unprivileged local attacker
14 has arbitrary read and write access to the kernel's memory. In many
16 but with systems in place that defend against the worst case we'll
18 still be kept in mind, is protecting the kernel against a _privileged_
23 The goals for successful self-protection systems would be that they
24 are effective, on by default, require no opt-in by developers, have no
[all …]
/linux/Documentation/virt/hyperv/
coco.rst
1 .. SPDX-License-Identifier: GPL-2.0
5 Hyper-V can create and run Linux guests that are Confidential Computing
7 the confidentiality and integrity of data in the VM's memory, even in the
9 CoCo VMs on Hyper-V share the generic CoCo VM threat model and security
10 objectives described in Documentation/security/snp-tdx-threat-model.rst. Note
11 that Hyper-V specific code in Linux refers to CoCo VMs as "isolated VMs" or
14 A Linux CoCo VM on Hyper-V requires the cooperation and interaction of the
19 * The hardware runs a version of Windows/Hyper-V with support for CoCo VMs
25 * AMD processor with SEV-SNP. Hyper-V does not run guest VMs with AMD SME,
26 SEV, or SEV-ES encryption, and such encryption is not sufficient for a CoCo
[all …]
/linux/Documentation/driver-api/pci/
p2pdma.rst
1 .. SPDX-License-Identifier: GPL-2.0
4 PCI Peer-to-Peer DMA Support
9 called Peer-to-Peer (or P2P). However, there are a number of issues that
10 make P2P transactions tricky to do in a perfectly safe way.
13 transactions between hierarchy domains, and in PCIe, each Root Port
18 same PCI bridge, as such devices are all in the same PCI hierarchy
23 The second issue is that to make use of existing interfaces in Linux,
24 memory that is used for P2P transactions needs to be backed by struct
33 In a given P2P implementation there may be three or more different
34 types of kernel drivers in play:
[all …]
/linux/Documentation/ABI/stable/
sysfs-devices-node
3 Contact: Linux Memory Management list <linux-mm@kvack.org>
9 Contact: Linux Memory Management list <linux-mm@kvack.org>
15 Contact: Linux Memory Management list <linux-mm@kvack.org>
17 Nodes that have regular memory.
21 Contact: Linux Memory Management list <linux-mm@kvack.org>
27 Contact: Linux Memory Management list <linux-mm@kvack.org>
29 Nodes that have regular or high memory.
34 Contact: Linux Memory Management list <linux-mm@kvack.org>
42 Contact: Linux Memory Management list <linux-mm@kvack.org>
48 Contact: Linux Memory Management list <linux-mm@kvack.org>
[all …]
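
The ABI file excerpted above documents the node mask files under /sys/devices/system/node/. A small sketch that prints a few of those masks follows; the specific file names used (possible, online, has_cpu, has_normal_memory) are taken from the full ABI document rather than from the truncated snippet, so treat them as assumptions about the running kernel.

/*
 * Sketch: print a few node masks documented in
 * Documentation/ABI/stable/sysfs-devices-node.
 */
#include <stdio.h>

static void show(const char *name)
{
	char path[128], buf[256];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/devices/system/node/%s", name);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("%-18s %s", name, buf);	/* buf already ends in '\n' */
	fclose(f);
}

int main(void)
{
	show("possible");
	show("online");
	show("has_cpu");
	show("has_normal_memory");
	return 0;
}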
