
Searched full:sequential (Results 1 – 25 of 336) sorted by relevance


/linux/Documentation/admin-guide/device-mapper/
dm-zoned.rst
9 doing raw block device accesses) the sequential write constraints of
37 dm-zoned implements an on-disk buffering scheme to handle non-sequential
38 write accesses to the sequential zones of a zoned block device.
52 sequential zones used exclusively to store user data. The conventional
55 later moved to a sequential zone so that the conventional zone can be
85 sequential zone, the write operation is processed directly only if the
87 offset within of the sequential data zone (i.e. the write operation is
92 automatically invalidate the same block in the sequential zone mapping
93 the chunk. If all blocks of the sequential zone become invalid, the zone
100 the sequential zone mapping a chunk, or if the chunk is buffered, from
[all …]
/linux/Documentation/filesystems/
zonefs.rst
12 device support (e.g. f2fs), zonefs does not hide the sequential write
13 constraint of zoned block devices to the user. Files representing sequential
40 * Sequential zones: these zones accept random reads but must be written
41 sequentially. Each sequential zone has a write pointer maintained by the
43 to the device. As a result of this write constraint, LBAs in a sequential zone
44 cannot be overwritten. Sequential zones must first be erased using a special
78 the zone containing the super block is a sequential zone, the mkzonefs format
94 For sequential write zones, the sub-directory "seq" is used.
132 Sequential zone files
135 The size of sequential zone files grouped in the "seq" sub-directory represents
[all …]
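
The zonefs description above implies a simple userspace pattern for sequential zone files: writes must use direct I/O and must land exactly at the current file size, which mirrors the zone write pointer. The C sketch below only illustrates that pattern and is not code from the kernel tree; the path /mnt/zonefs/seq/0 is a placeholder based on the "seq" sub-directory layout mentioned above, and 4096 bytes is assumed to satisfy the device's direct-I/O alignment.

    /* Hedged sketch: append one block to a zonefs sequential zone file. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static int append_block(const char *path, const void *data, size_t len)
    {
            struct stat st;
            void *buf;
            int fd, ret = -1;

            /* Sequential zone files only accept direct writes. */
            fd = open(path, O_WRONLY | O_DIRECT);
            if (fd < 0)
                    return -1;
            if (fstat(fd, &st))
                    goto out;
            if (posix_memalign(&buf, 4096, 4096))
                    goto out;
            memset(buf, 0, 4096);
            memcpy(buf, data, len < 4096 ? len : 4096);
            /* The write must start at the current file size, i.e. at the
             * zone write pointer; any other offset is refused. */
            if (pwrite(fd, buf, 4096, st.st_size) == 4096)
                    ret = 0;
            free(buf);
    out:
            close(fd);
            return ret;
    }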
/linux/tools/perf/pmu-events/arch/x86/elkhartlake/
frontend.json
56 …en accesses from sequential code crosses the cache line boundary, or when a branch target is moved…
65 …en accesses from sequential code crosses the cache line boundary, or when a branch target is moved…
74 …en accesses from sequential code crosses the cache line boundary, or when a branch target is moved…
/linux/tools/perf/pmu-events/arch/x86/snowridgex/
frontend.json
56 …en accesses from sequential code crosses the cache line boundary, or when a branch target is moved…
65 …en accesses from sequential code crosses the cache line boundary, or when a branch target is moved…
74 …en accesses from sequential code crosses the cache line boundary, or when a branch target is moved…
/linux/tools/perf/pmu-events/arch/x86/alderlaken/
frontend.json
16 …en accesses from sequential code crosses the cache line boundary, or when a branch target is moved…
25 …en accesses from sequential code crosses the cache line boundary, or when a branch target is moved…
/linux/drivers/md/
dm-zone.c
222 * Count the total number of and the number of mapped sequential zones of a
276 * number of mapped sequential zones: if this number is smaller than the in device_get_zone_resource_limits()
277 * total number of sequential zones of the target device, then resource in device_get_zone_resource_limits()
287 * If the target does not map any sequential zones, then we do not need in device_get_zone_resource_limits()
294 * If the target does not map all sequential zones, the limits in device_get_zone_resource_limits()
303 * If the target maps less sequential zones than the limit values, then in device_get_zone_resource_limits()
319 * Also count the total number of sequential zones for the mapped in device_get_zone_resource_limits()
321 * able to check if the mapped device actually has any sequential zones. in device_get_zone_resource_limits()
362 * sequential write required zones mapped. in dm_set_zones_restrictions()
dm-zoned-reclaim.c
57 * Align a sequential zone write pointer to chunk_block.
155 * If we are writing in a sequential zone, we must make sure in dmz_reclaim_copy()
156 * that writes are sequential. So Zeroout any eventual hole in dmz_reclaim_copy()
277 * Move valid blocks of the random data zone dzone into a free sequential zone.
278 * Once blocks are moved, remap the zone chunk to the sequential zone.
288 /* Get a free random or sequential zone */ in dmz_reclaim_rnd_data()
307 /* Flush the random data zone into the sequential zone */ in dmz_reclaim_rnd_data()
317 /* Free the sequential zone */ in dmz_reclaim_rnd_data()
391 * valid data blocks to a free sequential zone. in dmz_do_reclaim()
/linux/fs/zonefs/
file.c
78 * Sequential zones can only accept direct writes. This is already in zonefs_write_iomap_begin()
86 * For conventional zones, all blocks are always mapped. For sequential in zonefs_write_iomap_begin()
197 * Only sequential zone files can be truncated and truncation is allowed in zonefs_file_truncate()
268 * Since only direct writes are allowed in sequential files, page cache in zonefs_file_fsync()
319 * shared writable mappings. For sequential zone files, only read in zonefs_file_mmap()
339 * and below the zone write pointer for sequential zones. In both in zonefs_file_llseek()
448 * Handle direct writes. For sequential zone files, this is the only possible
451 * delivers write requests to the device in sequential order. This is always the
466 * For async direct IOs to sequential zone files, refuse IOCB_NOWAIT in zonefs_file_dio_write()
492 /* Enforce sequential writes (append only) in sequential zones */ in zonefs_file_dio_write()
[all …]
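
The truncation rule noted at file.c line 197 also has a userspace-visible side: per the zonefs documentation, a sequential zone file may only be truncated to 0 (which resets the zone and rewinds its write pointer) or up to its maximum size. A minimal, hypothetical illustration (the path is a placeholder):

    /* Hedged sketch: reset a zonefs sequential zone by truncating its file. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            int fd = open("/mnt/zonefs/seq/0", O_WRONLY);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            /* Size 0 invalidates the data and rewinds the zone write
             * pointer; intermediate sizes are rejected by zonefs. */
            if (ftruncate(fd, 0)) {
                    perror("ftruncate");
                    close(fd);
                    return 1;
            }
            close(fd);
            return 0;
    }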
zonefs.h
25 * Zone types: ZONEFS_ZTYPE_SEQ is used for all sequential zone types
65 /* Write pointer offset in the zone (sequential zones only, bytes) */
92 * sequential file truncation, two locks are used. For serializing
97 * a sequential file size on completion of direct IO writes.
/linux/tools/perf/tests/
builtin-test.c
43 static bool sequential = true; variable
381 if (err || !sequential) in start_test()
447 /* TODO: if !sequential waitpid the already forked children. */ in __cmd_test()
467 if (!sequential) { in __cmd_test()
544 OPT_BOOLEAN('S', "sequential", &sequential, in cmd_test()
573 sequential = true; in cmd_test()
575 sequential = false; in cmd_test()
/linux/tools/testing/selftests/bpf/benchs/
run_bench_local_storage.sh
11 summarize_local_storage "hashmap (control) sequential get: "\
19 summarize_local_storage "local_storage cache sequential get: "\
bench_local_storage.c
241 /* cache sequential and interleaved get benchs test local_storage get
245 * cache sequential get: call bpf_task_storage_get on n maps in order
246 * cache interleaved get: like "sequential get", but interleave 4 calls to the
/linux/Documentation/admin-guide/
bcache.rst
34 to caching large sequential IO. Bcache detects sequential IO and skips it;
353 By default, bcache doesn't cache everything. It tries to skip sequential IO -
370 slower SSDs, many disks being cached by one SSD, or mostly sequential IO. So
376 cranking down the sequential bypass).
440 A sequential IO will bypass the cache once it passes this threshold; the
441 most recent 128 IOs are tracked so sequential IO can be detected even when
446 against all new requests to determine which new requests are sequential
447 continuations of previous requests for the purpose of determining sequential
448 cutoff. This is necessary if the sequential cutoff value is greater than the
449 maximum acceptable sequential size for any single request.
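
The sequential_cutoff knob referenced in the bcache notes above is a sysfs attribute on the backing device. As a rough illustration only (the bcache0 device name and exact sysfs path are assumptions taken from the documentation's examples), it can be tuned programmatically; writing 0 would disable the sequential bypass altogether:

    /* Hedged sketch: raise the bcache sequential cutoff to 4 MiB. */
    #include <stdio.h>

    int main(void)
    {
            FILE *f = fopen("/sys/block/bcache0/bcache/sequential_cutoff", "w");

            if (!f) {
                    perror("fopen");
                    return 1;
            }
            /* Value is in bytes; IO streams larger than this bypass the cache. */
            fprintf(f, "%u\n", 4u << 20);
            return fclose(f) ? 1 : 0;
    }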
/linux/Documentation/ABI/testing/
sysfs-driver-intel-m10-bmc
22 of sequential MAC addresses assigned to the board
32 Description: Read only. Returns the number of sequential MAC
/linux/tools/perf/pmu-events/arch/x86/sierraforest/
frontend.json
20 …nts every time the code stream enters into a new cache line by walking sequential from the previou…
28 …nts every time the code stream enters into a new cache line by walking sequential from the previou…
/linux/tools/perf/pmu-events/arch/x86/grandridge/
frontend.json
20 …nts every time the code stream enters into a new cache line by walking sequential from the previou…
28 …nts every time the code stream enters into a new cache line by walking sequential from the previou…
/linux/tools/perf/pmu-events/arch/x86/lunarlake/
frontend.json
3 …nts every time the code stream enters into a new cache line by walking sequential from the previou…
12 …nts every time the code stream enters into a new cache line by walking sequential from the previou…
/linux/mm/
readahead.c
32 * a subsequent readahead. Once a series of sequential reads has been
52 * discovered from the struct file_ra_state for simple sequential reads,
54 * sequential reads are interleaved. Specifically: where the readahead
69 * reads from there are often sequential. There are other minor
409 * In interleaved sequential reads, concurrent streams on the same fd can
418 * for sequential patterns. Hence interleaved reads might be served as
419 * sequential ones.
569 * A start of file, oversized read, or sequential cache miss: in page_cache_sync_ra()
583 * that a sequential stream would leave behind. in page_cache_sync_ra()
638 * It's the expected callback index, assume sequential access. in page_cache_async_ra()
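
The readahead heuristics described in readahead.c key off detected sequential access, but userspace can also state the access pattern explicitly rather than rely on detection. A small, hedged example using the standard posix_fadvise() hint (the file name is a placeholder):

    /* Hedged sketch: declare an intended sequential scan so the kernel
     * can size the readahead window accordingly. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[1 << 16];
            ssize_t n;
            int fd = open("data.bin", O_RDONLY);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            /* Hint: reads will be sequential from offset 0 to EOF. */
            posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
            while ((n = read(fd, buf, sizeof(buf))) > 0)
                    ;       /* consume the data sequentially */
            close(fd);
            return 0;
    }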
/linux/arch/sh/include/asm/
smp.h
17 /* Map from cpu id to sequential logical cpu number. */
21 /* The reverse map from sequential logical cpu number to cpu id. */
/linux/tools/perf/pmu-events/arch/x86/amdzen4/
cache.json
448 … pipeline which hit in the L2 cache of type L2Stream (fetch additional sequential lines into L2 ca…
466 …ich hit in the L2 cache of type L2Burst (aggressively fetch additional sequential lines into L2 ca…
478 … pipeline which hit in the L2 cache of type L1Stream (fetch additional sequential lines into L1 ca…
502 …he L2 cache and hit in the L3 cache of type L2Stream (fetch additional sequential lines into L2 ca…
520 …and hit in the L3 cache of type L2Burst (aggressively fetch additional sequential lines into L2 ca…
532 …he L2 cache and hit in the L3 cache of type L1Stream (fetch additional sequential lines into L1 ca…
556 …which miss the L2 and the L3 caches of type L2Stream (fetch additional sequential lines into L2 ca…
574 …he L2 and the L3 caches of type L2Burst (aggressively fetch additional sequential lines into L2 ca…
586 …which miss the L2 and the L3 caches of type L1Stream (fetch additional sequential lines into L1 ca…
/linux/include/linux/
seqlock_types.h
28 * serialization and non-preemptibility requirements, use a sequential
75 * Sequential locks (seqlock_t)
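
For context on the "sequential locks" named in seqlock_types.h, the usual in-kernel pattern pairs a writer section with a reader retry loop. The sketch below is a generic illustration of that API (DEFINE_SEQLOCK, read_seqbegin/read_seqretry), not code from the listed files; the stats counters are invented for the example:

    /* Hedged sketch: a seqlock_t protecting two counters. Readers never
     * block the writer and simply retry if an update raced with them. */
    #include <linux/seqlock.h>
    #include <linux/types.h>

    static DEFINE_SEQLOCK(stats_lock);
    static u64 bytes_read, bytes_written;

    void stats_update(u64 rd, u64 wr)
    {
            write_seqlock(&stats_lock);     /* serializes writers, bumps sequence */
            bytes_read += rd;
            bytes_written += wr;
            write_sequnlock(&stats_lock);
    }

    void stats_snapshot(u64 *rd, u64 *wr)
    {
            unsigned int seq;

            do {
                    seq = read_seqbegin(&stats_lock);
                    *rd = bytes_read;
                    *wr = bytes_written;
            } while (read_seqretry(&stats_lock, seq));      /* retry on racing write */
    }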
/linux/arch/loongarch/include/asm/
smp.h
58 /* Map from cpu id to sequential logical cpu number. This will only
63 /* The reverse map from sequential logical cpu number to cpu id. */
/linux/arch/powerpc/platforms/pseries/
of_helpers.c
75 /* Get number-sequential-elements:encode-int */ in of_read_drc_info_cell()
80 /* Get sequential-increment:encode-int */ in of_read_drc_info_cell()
/linux/arch/mips/include/asm/
smp.h
38 /* Map from cpu id to sequential logical cpu number. This will only
43 /* The reverse map from sequential logical cpu number to cpu id. */
/linux/tools/testing/selftests/mm/
test_vmalloc.sh
53 echo "It runs all test cases on one single CPU with sequential order."
100 echo "sequential order"
