
Searched +full:non +full:- +full:contiguous (Results 1 – 25 of 491) sorted by relevance


/linux/tools/testing/selftests/resctrl/
cat_test.c
1 // SPDX-License-Identifier: GPL-2.0
19 * test with n bits is MIN_DIFF_PERCENT_PER_BIT * (n - 1). With e.g. 5 vs 4
21 * MIN_DIFF_PERCENT_PER_BIT * (4 - 1) = 3 percent.
44 float delta = (__s64)(avg_llc_val - *prev_avg_llc_val); in show_results_info()
81 fp = fopen(param->filename, "r"); in check_results()
85 return -1; in check_results()
115 MIN_DIFF_PERCENT_PER_BIT * (bits - 1), in check_results()
137 * cat_test - Execute CAT benchmark and measure cache misses
169 if (strcmp(param->filename, "") == 0) in cat_test()
170 sprintf(param->filename, "stdio"); in cat_test()
[all …]
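
The threshold rule quoted above, MIN_DIFF_PERCENT_PER_BIT * (n - 1), can be restated as a tiny standalone check. This is an illustrative sketch, not the selftest's own code, and the constant value of 1 is an assumption:

/* Illustrative only: the pass threshold grows with the number of cache bits. */
#include <stdio.h>

#define MIN_DIFF_PERCENT_PER_BIT 1 /* assumed value, for illustration */

/* A run passes if the measured miss-rate difference meets the per-bit minimum. */
static int passes(unsigned int bits, double diff_percent)
{
	return diff_percent >= MIN_DIFF_PERCENT_PER_BIT * (bits - 1);
}

int main(void)
{
	/* 5 vs 4 bits: the 4-bit mask requires 1 * (4 - 1) = 3 percent. */
	printf("3.5%% diff with 4 bits passes: %d\n", passes(4, 3.5));
	return 0;
}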
/linux/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/
sve.json
4 …ding the Advanced SIMD scalar instructions and the instructions listed in Non-SIMD SVE instruction…
8 …ecturally executed SVE instructions, including the instructions listed in Non-SIMD SVE instruction…
12 …ecturally executed SVE instructions, including the instructions listed in Non-SIMD SVE instruction…
20 "BriefDescription": "This event counts all architecturally executed micro-operation."
28 …ations due to scalar, Advanced SIMD, and SVE instructions listed in Floating-point instructions se…
32 …on": "This event counts architecturally executed floating-point fused multiply-add and multiply-su…
36 …"BriefDescription": "This event counts architecturally executed floating-point reciprocal estimate…
40 …uted floating-point convert operations due to the scalar, Advanced SIMD, and SVE floating-point co…
60 …"BriefDescription": "This event counts architecturally executed SVE 64-bit integer divide operatio…
76 …"BriefDescription": "This event counts architecturally executed SVE integer 64-bit x 64-bit multip…
[all …]
/linux/arch/arm64/mm/
contpte.c
1 // SPDX-License-Identifier: GPL-2.0-only
41 unsigned long last_addr = addr + PAGE_SIZE * (nr - 1); in contpte_try_unfold_partial()
42 pte_t *last_ptep = ptep + nr - 1; in contpte_try_unfold_partial()
74 * NOTE: Instead of using N=16 as the contiguous block length, we use in contpte_convert()
77 * NOTE: 'n' and 'c' are used to denote the "contiguous bit" being in contpte_convert()
80 * We worry about two cases where contiguous bit is used: in contpte_convert()
81 * - When folding N smaller non-contiguous ptes as 1 contiguous block. in contpte_convert()
82 * - When unfolding a contiguous block into N smaller non-contiguous ptes. in contpte_convert()
86 * 0) Initial page-table layout: in contpte_convert()
88 * +----+----+----+----+ in contpte_convert()
[all …]
hugetlbpage.c
1 // SPDX-License-Identifier: GPL-2.0-only
24 * ---------------------------------------------------
26 * ---------------------------------------------------
30 * ---------------------------------------------------
44 order = PUD_SHIFT - PAGE_SHIFT; in arm64_hugetlb_cma_reserve()
46 order = CONT_PMD_SHIFT - PAGE_SHIFT; in arm64_hugetlb_cma_reserve()
146 * Changing some bits of contiguous entries requires us to follow a
147 * Break-Before-Make approach, breaking the whole contiguous set
149 * "Misprogramming of the Contiguous bit", page D4-1762.
164 while (--ncontig) { in get_clear_contig()
[all …]
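
The arm64_hugetlb_cma_reserve() lines above derive the CMA order from page-table shifts. Here is a quick worked computation using typical 4K-granule values; all three shift constants below are assumptions for illustration, not taken from the kernel headers:

/* order = SHIFT - PAGE_SHIFT; values assumed for a 4K granule. */
#include <stdio.h>

#define PAGE_SHIFT     12 /* 4 KiB pages (assumed) */
#define PUD_SHIFT      30 /* 1 GiB PUD huge pages (assumed) */
#define CONT_PMD_SHIFT 25 /* 16 contiguous 2 MiB PMDs = 32 MiB (assumed) */

int main(void)
{
	printf("PUD order %d -> %lu MiB blocks\n",
	       PUD_SHIFT - PAGE_SHIFT, (1UL << PUD_SHIFT) >> 20);
	printf("CONT_PMD order %d -> %lu MiB blocks\n",
	       CONT_PMD_SHIFT - PAGE_SHIFT, (1UL << CONT_PMD_SHIFT) >> 20);
	return 0;
}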
/linux/kernel/dma/
Kconfig
1 # SPDX-License-Identifier: GPL-2.0-only
99 pools as needed. To reduce run-time kernel memory requirements, you
123 <Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt>
128 # Should be selected if we can mmap non-coherent mappings to userspace.
161 bool "DMA Contiguous Memory Allocator"
164 This enables the Contiguous Memory Allocator which allows drivers
165 to allocate big physically-contiguous blocks of memory for use with
166 hardware components that do not support I/O map nor scatter-gather.
171 For more information see <kernel/dma/contiguous.c>.
177 bool "Enable separate DMA Contiguous Memory Area for NUMA Node"
[all …]
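
The help text above presents CMA as the source of large physically contiguous DMA buffers. The sketch below shows how a driver commonly obtains one through the generic DMA API, which can be satisfied from CMA when CONFIG_DMA_CMA is enabled; the function names, device pointer, and 4 MiB size are placeholders, not code from the tree:

/* Hypothetical driver fragment: allocate and free one large physically
 * contiguous, coherent DMA buffer. */
#include <linux/dma-mapping.h>
#include <linux/sizes.h>

static void *example_buf;
static dma_addr_t example_dma;

static int example_alloc(struct device *dev)
{
	example_buf = dma_alloc_coherent(dev, SZ_4M, &example_dma, GFP_KERNEL);
	if (!example_buf)
		return -ENOMEM;
	/* ... program example_dma into hardware that needs one flat buffer ... */
	return 0;
}

static void example_free(struct device *dev)
{
	dma_free_coherent(dev, SZ_4M, example_buf, example_dma);
}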
remap.c
1 // SPDX-License-Identifier: GPL-2.0
5 #include <linux/dma-map-ops.h>
13 if (!area || !(area->flags & VM_DMA_COHERENT)) in dma_common_find_pages()
15 WARN(area->flags != VM_DMA_COHERENT, in dma_common_find_pages()
17 return area->pages; in dma_common_find_pages()
22 * Cannot be used in non-sleeping contexts
32 find_vm_area(vaddr)->pages = pages; in dma_common_pages_remap()
37 * Remaps an allocated contiguous region into another vm_area.
38 * Cannot be used in non-sleeping contexts
66 if (!area || !(area->flags & VM_DMA_COHERENT)) { in dma_common_free_remap()
/linux/Documentation/driver-api/dmaengine/
provider.rst
20 DMA-eligible devices to the controller itself. Whenever the device
44 transfer into smaller sub-transfers.
47 that involve a single contiguous block of data. However, some of the
49 non-contiguous buffers to a contiguous buffer, which is called
50 scatter-gather.
53 scatter-gather. So we're left with two cases here: either we have a
56 that implements in hardware scatter-gather.
79 These were just the general memory-to-memory (also called mem2mem) or
80 memory-to-device (mem2dev) kind of transfers. Most devices often
98 documentation file in Documentation/crypto/async-tx-api.rst.
[all …]
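
The provider documentation above contrasts bouncing data through one contiguous buffer with true hardware scatter-gather. The following is an illustrative sketch of how a consumer typically hands a non-contiguous buffer to the DMA layer as a scatterlist; the chunk pointers, length, and function name are assumptions:

/* Illustrative only: map three non-contiguous chunks for a device that
 * supports scatter-gather. */
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int example_map(struct device *dev, void *a, void *b, void *c, size_t len)
{
	struct scatterlist sg[3];
	int nents;

	sg_init_table(sg, 3);
	sg_set_buf(&sg[0], a, len);
	sg_set_buf(&sg[1], b, len);
	sg_set_buf(&sg[2], c, len);

	nents = dma_map_sg(dev, sg, 3, DMA_TO_DEVICE);
	if (!nents)
		return -EIO;
	/* ... hand the nents mapped segments to the controller ... */
	dma_unmap_sg(dev, sg, 3, DMA_TO_DEVICE);
	return 0;
}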
/linux/Documentation/mm/
memory-model.rst
1 .. SPDX-License-Identifier: GPL-2.0
9 spans a contiguous range up to the maximal address. It could be,
11 for the CPU. Then there could be several contiguous ranges at
23 Regardless of the selected memory model, there exists one-to-one
35 non-NUMA systems with contiguous, or mostly contiguous, physical
54 straightforward: `PFN - ARCH_PFN_OFFSET` is an index to the
65 as hot-plug and hot-remove of the physical memory, alternative memory
66 maps for non-volatile memory devices and deferred initialization of
85 NR_MEM_SECTIONS = 2 ^ (MAX_PHYSMEM_BITS - SECTION_SIZE_BITS)
87 The `mem_section` objects are arranged in a two-dimensional array
[all …]
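
The SPARSEMEM formula above fixes the number of sections at build time. A small worked example follows, using values commonly seen on x86_64 (MAX_PHYSMEM_BITS = 46 and SECTION_SIZE_BITS = 27 are assumptions here), together with the matching PFN-to-section arithmetic:

/* Worked example of the section arithmetic quoted above (values assumed). */
#include <stdio.h>

#define MAX_PHYSMEM_BITS  46 /* assumed, typical x86_64 */
#define SECTION_SIZE_BITS 27 /* assumed: 128 MiB sections */
#define PAGE_SHIFT        12 /* assumed: 4 KiB pages */

int main(void)
{
	unsigned long pfn = 0x123456; /* arbitrary example PFN */

	printf("NR_MEM_SECTIONS = %lu\n",
	       1UL << (MAX_PHYSMEM_BITS - SECTION_SIZE_BITS)); /* 524288 */
	printf("pfn %#lx -> section %lu\n", pfn,
	       pfn >> (SECTION_SIZE_BITS - PAGE_SHIFT));
	return 0;
}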
/linux/fs/resctrl/
ctrlmondata.c
1 // SPDX-License-Identifier: GPL-2.0-only
4 * - Cache Allocation code.
49 if (!r->membw.delay_linear && r->membw.arch_needs_linear) { in bw_validate()
50 rdt_last_cmd_puts("No support for non-linear MB domains\n"); in bw_validate()
66 if (bw < r->membw.min_bw || bw > r->membw.max_bw) { in bw_validate()
68 bw, r->membw.min_bw, r->membw.max_bw); in bw_validate()
72 *data = roundup(bw, (unsigned long)r->membw.bw_gran); in bw_validate()
80 u32 closid = data->rdtgrp->closid; in parse_bw()
81 struct rdt_resource *r = s->res; in parse_bw()
84 cfg = &d->staged_config[s->conf_type]; in parse_bw()
[all …]
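
bw_validate() above rejects values outside [min_bw, max_bw] and rounds the rest up to the bandwidth granularity. A toy restatement of that clamp-and-round step, with made-up numeric bounds and granularity:

/* Toy version of the validation shown above; not kernel code. */
#include <stdio.h>

#define ROUNDUP(x, y) ((((x) + (y) - 1) / (y)) * (y))

int main(void)
{
	unsigned long min_bw = 10, max_bw = 100, bw_gran = 10; /* assumed */
	unsigned long bw = 37;

	if (bw < min_bw || bw > max_bw)
		return 1; /* would be rejected with an error message */
	printf("requested %lu -> programmed %lu\n", bw, ROUNDUP(bw, bw_gran));
	return 0; /* prints: requested 37 -> programmed 40 */
}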
/linux/arch/nios2/
Kconfig
1 # SPDX-License-Identifier: GPL-2.0
51 int "Order of maximal physically contiguous allocations"
55 contiguous allocations. The limit is called MAX_PAGE_ORDER and it
57 allocated as a single contiguous block. This option allows
59 large blocks of physically contiguous memory is required.
82 2 or 4. Any non-aligned load/store instructions will be trapped and
99 some command-line options at build time by entering them here. In
120 bool "Passed kernel command line from u-boot"
122 Use bootargs env variable from u-boot for kernel command line.
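
The MAX_PAGE_ORDER help text in this result (and the matching one under arch/sh/mm below) bounds the largest block the buddy allocator will return as a single contiguous allocation. A quick computation of that size for an assumed order of 10 with 4 KiB pages:

/* Largest buddy allocation = PAGE_SIZE << MAX_PAGE_ORDER (values assumed). */
#include <stdio.h>

#define PAGE_SIZE      4096UL /* assumed 4 KiB pages */
#define MAX_PAGE_ORDER 10     /* assumed configured order */

int main(void)
{
	printf("largest contiguous block: %lu KiB\n",
	       (PAGE_SIZE << MAX_PAGE_ORDER) >> 10); /* 4096 KiB = 4 MiB */
	return 0;
}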
/linux/drivers/gpu/drm/exynos/
exynos_drm_gem.c
1 // SPDX-License-Identifier: GPL-2.0-or-later
9 #include <linux/dma-buf.h>
26 struct drm_device *dev = exynos_gem->base.dev; in exynos_drm_alloc_buf()
29 if (exynos_gem->dma_addr) { in exynos_drm_alloc_buf()
35 * if EXYNOS_BO_CONTIG, fully physically contiguous memory in exynos_drm_alloc_buf()
36 * region will be allocated else physically contiguous in exynos_drm_alloc_buf()
39 if (!(exynos_gem->flags & EXYNOS_BO_NONCONTIG)) in exynos_drm_alloc_buf()
46 if (exynos_gem->flags & EXYNOS_BO_WC || in exynos_drm_alloc_buf()
47 !(exynos_gem->flags & EXYNOS_BO_CACHABLE)) in exynos_drm_alloc_buf()
54 exynos_gem->dma_attrs = attr; in exynos_drm_alloc_buf()
[all …]
/linux/Documentation/admin-guide/mm/
nommu-mmap.rst
2 No-MMU memory mapping support
5 The kernel has limited support for memory mapping under no-MMU conditions, such
16 The behaviour is similar between the MMU and no-MMU cases, but not identical;
21 In the MMU case: VM regions backed by arbitrary pages; copy-on-write
24 In the no-MMU case: VM regions backed by arbitrary contiguous runs of
31 the no-MMU case doesn't support these, behaviour is identical to
39 In the no-MMU case:
41 - If one exists, the kernel will re-use an existing mapping to the
45 - If possible, the file mapping will be directly on the backing device
50 - If the backing device can't or won't permit direct sharing,
[all …]
pagemap.rst
12 physical frame each virtual page is mapped to. It contains one 64-bit
16 * Bits 0-54 page frame number (PFN) if present
17 * Bits 0-4 swap type if swapped
18 * Bits 5-54 swap offset if swapped
19 * Bit 55 pte is soft-dirty (see
20 Documentation/admin-guide/mm/soft-dirty.rst)
22 * Bit 57 pte is uffd-wp write-protected (since 5.13) (see
23 Documentation/admin-guide/mm/userfaultfd.rst)
25 * Bits 59-60 zero
26 * Bit 61 page is file-page or shared-anon (since 3.5)
[all …]
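
The bit layout listed above is enough to decode /proc/<pid>/pagemap from user space. The sketch below reads the entry for one mapped address and extracts the present bit, the soft-dirty bit, and the PFN; bit 63 is the "present" flag from the same table (truncated out of the excerpt), and the PFN field is zeroed for unprivileged readers on recent kernels:

/* Minimal pagemap decoder sketch; error handling kept to the bare minimum. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	long psize = sysconf(_SC_PAGESIZE);
	uintptr_t addr = (uintptr_t)&psize; /* any mapped address will do */
	uint64_t entry;
	int fd = open("/proc/self/pagemap", O_RDONLY);

	/* One 64-bit entry per virtual page, indexed by addr / page size. */
	if (fd < 0 || pread(fd, &entry, sizeof(entry),
			    (addr / psize) * sizeof(entry)) != sizeof(entry))
		return 1;
	printf("present=%llu soft-dirty=%llu pfn=%#llx\n",
	       (unsigned long long)(entry >> 63 & 1),
	       (unsigned long long)(entry >> 55 & 1),
	       (unsigned long long)(entry & ((1ULL << 55) - 1)));
	close(fd);
	return 0;
}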
/linux/arch/sh/mm/
Kconfig
1 # SPDX-License-Identifier: GPL-2.0
12 Some SH processors (such as SH-2/SH-2A) lack an MMU. In order to
15 On other systems (such as the SH-3 and 4) where an MMU exists,
26 On MMU-less systems, any of these page sizes can be selected
34 int "Order of maximal physically contiguous allocations"
41 contiguous allocations. The limit is called MAX_PAGE_ORDER and it
43 allocated as a single contiguous block. This option allows
45 large blocks of physically contiguous memory is required.
89 bool "Support 32-bit physical addressing through PMB"
95 32-bits through the SH-4A PMB. If this is not set, legacy
[all …]
/linux/drivers/gpu/drm/xe/
xe_bo_doc.h
1 /* SPDX-License-Identifier: MIT */
25 * ----------
32 * vmap (XE can access the memory via xe_map layer) and have contiguous physical
35 * More details of why kernel BOs are pinned and contiguous below.
38 * --------
53 * the BO dma-resv slots / lock point to the VM's dma-resv slots / lock (all
54 * private BOs to a VM share common dma-resv slots / lock).
62 * own unique dma-resv slots / lock. An external BO will be in an array of all
90 * ----------------
109 * dma-resv slots.
[all …]
/linux/drivers/s390/cio/
itcw.c
1 // SPDX-License-Identifier: GPL-2.0
21 * struct itcw - incremental tcw helper data type
24 * tcw and associated tccb, tsb, data tidaw-list plus an optional interrogate
26 * contiguous buffer provided by the user.
29 * - reset unused fields to zero
30 * - fill in required pointers
31 * - ensure required alignment for data structures
32 * - prevent data structures to cross 4k-byte boundary where required
33 * - calculate tccb-related length fields
34 * - optionally provide ready-made interrogate tcw and associated structures
[all …]
/linux/tools/testing/selftests/net/packetdrill/
tcp_dsack_mult.pkt
1 // SPDX-License-Identifier: GPL-2.0
4 --mss=1000
23 // Check SACK coalescing (contiguous sequence).
27 // Check we have two SACK ranges for non contiguous sequences.
/linux/Documentation/devicetree/bindings/arm/
arm,coresight-catu.yaml
1 # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
3 ---
4 $id: http://devicetree.org/schemas/arm/arm,coresight-catu.yaml#
5 $schema: http://devicetree.org/meta-schemas/core.yaml#
10 - Mathieu Poirier <mathieu.poirier@linaro.org>
11 - Mike Leach <mike.leach@linaro.org>
12 - Leo Yan <leo.yan@linaro.org>
13 - Suzuki K Poulose <suzuki.poulose@arm.com>
26 translates contiguous Virtual Addresses (VAs) from an AXI master into
27 non-contiguous Physical Addresses (PAs) that are intended for system memory.
[all …]
/linux/drivers/vfio/pci/
vfio_pci_igd.c
1 // SPDX-License-Identifier: GPL-2.0-only
8 * Register a device specific region through which to provide read-only
34 * igd_opregion_shift_copy() - Copy OpRegion to user buffer and shift position.
44 * Return: 0 on success, -EFAULT otherwise.
55 return -EFAULT; in igd_opregion_shift_copy()
59 *remaining -= bytes; in igd_opregion_shift_copy()
68 unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) - VFIO_PCI_NUM_REGIONS; in vfio_pci_igd_rw()
69 struct igd_opregion_vbt *opregionvbt = vdev->region[i].data; in vfio_pci_igd_rw()
73 if (pos >= vdev->region[i].size || iswrite) in vfio_pci_igd_rw()
74 return -EINVAL; in vfio_pci_igd_rw()
[all …]
/linux/tools/perf/pmu-events/arch/arm64/
common-and-microarch.json
129 "PublicDescription": "Attributable Level 1 data cache write-back",
132 "BriefDescription": "Attributable Level 1 data cache write-back"
147 "PublicDescription": "Attributable Level 2 data cache write-back",
150 "BriefDescription": "Attributable Level 2 data cache write-back"
283 "PublicDescription": "Access to another socket in a multi-socket system",
286 "BriefDescription": "Access to another socket in a multi-socket system"
323 … "PublicDescription": "Attributable memory read access to another socket in a multi-socket system",
326 … "BriefDescription": "Attributable memory read access to another socket in a multi-socket system"
329 …"PublicDescription": "Level 1 data cache long-latency read miss. The counter counts each memory r…
332 "BriefDescription": "Level 1 data cache long-latency read miss"
[all …]
/linux/tools/testing/selftests/bpf/progs/
dynptr_success.c
1 // SPDX-License-Identifier: GPL-2.0
133 sample->pid += index; in ringbuf_callback()
158 sample->pid = 10; in test_ringbuf()
163 if (sample->pid != 55) in test_ringbuf()
185 if (ret != -EINVAL) { in test_skb_readonly()
243 if (bpf_dynptr_size(&ptr) != bytes - off) { in test_adjust()
256 if (bpf_dynptr_size(&ptr) != trim - off) { in test_adjust()
283 if (bpf_dynptr_adjust(&ptr, 5, 1) != -EINVAL) { in test_adjust_err()
289 if (bpf_dynptr_adjust(&ptr, size + 1, size + 1) != -ERANGE) { in test_adjust_err()
295 if (bpf_dynptr_adjust(&ptr, 0, size + 1) != -ERANGE) { in test_adjust_err()
[all …]
/linux/drivers/net/ethernet/intel/ice/
ice_dcb_lib.c
1 // SPDX-License-Identifier: GPL-2.0
9 * ice_dcb_get_ena_tc - return bitmap of enabled TCs
43 if (vsi->tc_cfg.ena_tc & BIT(i)) in ice_is_pfc_causing_hung_q()
47 for (tc = 0; tc < num_tcs - 1; tc++) in ice_is_pfc_causing_hung_q()
48 if (ice_find_q_in_range(vsi->tc_cfg.tc_info[tc].qoffset, in ice_is_pfc_causing_hung_q()
49 vsi->tc_cfg.tc_info[tc + 1].qoffset, in ice_is_pfc_causing_hung_q()
56 up2tc = rd32(&pf->hw, PRTDCB_TUP2TC); in ice_is_pfc_causing_hung_q()
70 ref_prio_xoff[i] = pf->stats.priority_xoff_rx[i]; in ice_is_pfc_causing_hung_q()
76 if (pf->stats.priority_xoff_rx[i] > ref_prio_xoff[i]) in ice_is_pfc_causing_hung_q()
83 * ice_dcb_get_mode - gets the DCB mode
[all …]
/linux/mm/
Kconfig
1 # SPDX-License-Identifier: GPL-2.0-only
33 compress them into a dynamically allocated RAM-based memory pool.
160 zsmalloc is a slab-based memory allocator designed to store
175 int "Maximum number of physical pages per-zspage"
252 specifically-sized allocations with user-controlled contents
256 user-controlled allocations. This may very slightly increase
258 of extra pages since the bulk of user-controlled allocations
259 are relatively long-lived.
274 Try running: slabinfo -DA
311 utilization of a direct-mapped memory-side-cache. See section
[all …]
/linux/fs/crypto/
inline_crypt.c
1 // SPDX-License-Identifier: GPL-2.0
11 * crypto API. See Documentation/block/inline-encryption.rst. fscrypt still
15 #include <linux/blk-crypto.h>
30 if (sb->s_cop->get_devices) { in fscrypt_get_devices()
31 devs = sb->s_cop->get_devices(sb, num_devs); in fscrypt_get_devices()
37 return ERR_PTR(-ENOMEM); in fscrypt_get_devices()
38 devs[0] = sb->s_bdev; in fscrypt_get_devices()
45 const struct super_block *sb = ci->ci_inode->i_sb; in fscrypt_get_dun_bytes()
46 unsigned int flags = fscrypt_policy_flags(&ci->ci_policy); in fscrypt_get_dun_bytes()
59 dun_bits = fscrypt_max_file_dun_bits(sb, ci->ci_data_unit_bits); in fscrypt_get_dun_bytes()
[all …]
/linux/fs/xfs/libxfs/
xfs_rmap.c
1 // SPDX-License-Identifier: GPL-2.0
50 cur->bc_rec.r.rm_startblock = bno; in xfs_rmap_lookup_le()
51 cur->bc_rec.r.rm_blockcount = 0; in xfs_rmap_lookup_le()
52 cur->bc_rec.r.rm_owner = owner; in xfs_rmap_lookup_le()
53 cur->bc_rec.r.rm_offset = offset; in xfs_rmap_lookup_le()
54 cur->bc_rec.r.rm_flags = flags; in xfs_rmap_lookup_le()
65 return -EFSCORRUPTED; in xfs_rmap_lookup_le()
85 cur->bc_rec.r.rm_startblock = bno; in xfs_rmap_lookup_eq()
86 cur->bc_rec.r.rm_blockcount = len; in xfs_rmap_lookup_eq()
87 cur->bc_rec.r.rm_owner = owner; in xfs_rmap_lookup_eq()
[all …]
