Lines Matching full:compression
18 `zstd` is a fast lossless compression algorithm and data compression tool,
21 `zstd` offers highly configurable compression speed,
23 and strong modes nearing lzma compression ratios.
92 Benchmark file(s) using compression level #
104 `#` compression level \[1-19] (default: 3)
106 unlocks high compression levels 20+ (maximum 22), using a lot more memory.
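The level flags described above can be exercised directly; a minimal sketch (file paths and contents are illustrative):

```shell
# Sample input (path and contents are illustrative).
seq 1 5000 > /tmp/zex_sample.txt
# Default level is 3; -19 is the highest regular level; 20-22 require --ultra.
zstd -f -3  /tmp/zex_sample.txt -o /tmp/zex_l3.zst
zstd -f -19 /tmp/zex_sample.txt -o /tmp/zex_l19.zst
zstd -f --ultra -22 /tmp/zex_sample.txt -o /tmp/zex_l22.zst
```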
109 switch to ultra-fast compression levels.
111 The higher the value, the faster the compression speed,
112 at the cost of some compression ratio.
113 This setting overrides the compression level if one was set previously.
114 Similarly, a compression level set after `--fast` takes precedence.
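The interaction between `--fast` and an explicit level can be seen directly (file names are illustrative):

```shell
seq 1 5000 > /tmp/zex_fast.txt
# --fast=5 selects an ultra-fast (negative) compression level.
zstd -f --fast=5 /tmp/zex_fast.txt -o /tmp/zex_f5.zst
# A level given after --fast wins: this run compresses at level 19.
zstd -f --fast=5 -19 /tmp/zex_fast.txt -o /tmp/zex_f19.zst
```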
122 Does not spawn a thread for compression; uses a single thread for both I/O and compression.
123 In this mode, compression is serialized with I/O, which is slightly slower.
124 (This is different from `-T1`, which spawns 1 compression thread in parallel with I/O.)
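The distinction between `-T1` and `--single-thread` can be sketched as follows (file names are illustrative; both produce equivalent frames, only the threading model differs):

```shell
seq 1 5000 > /tmp/zex_st.txt
# -T1: one worker thread compresses in parallel with the I/O thread.
zstd -f -T1 /tmp/zex_st.txt -o /tmp/zex_t1.zst
# --single-thread: no worker at all; compression and I/O share one thread.
zstd -f --single-thread /tmp/zex_st.txt -o /tmp/zex_1t.zst
```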
132 `zstd` will dynamically adapt compression level to perceived I/O conditions.
133 Compression level adaptation can be observed live with the `-v` option.
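A small sketch of adaptive mode (the input file is illustrative; on a fast local disk the level may simply stay near the default):

```shell
seq 1 200000 > /tmp/zex_ad.txt
# --adapt varies the compression level with I/O conditions;
# -v prints the currently selected level while running.
zstd -f --adapt -v /tmp/zex_ad.txt -o /tmp/zex_ad.zst
```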
146 This setting is designed to improve the compression ratio for files with
155 This is effectively dictionary compression with some convenient parameter
163 to improve compression ratio at the cost of speed
164 Note: for level 19, you can get increased compression ratio at the cost
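The long-distance matching mode referred to above can be exercised like this; the window log value of 27 and the file names are illustrative. Note that decompression needs the same (or larger) window limit:

```shell
seq 1 100000 > /tmp/zex_long.txt
# --long=27 enables long-distance matching with a 2^27-byte window.
zstd -f --long=27 /tmp/zex_long.txt -o /tmp/zex_long.zst
# Pass --long again on decompression to raise the window limit.
zstd -f -d --long=27 /tmp/zex_long.zst -o /tmp/zex_long.out
cmp /tmp/zex_long.txt /tmp/zex_long.out
```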
168 `zstd` will periodically synchronize the compression state to make the
170 compression ratio, and the faster compression levels will see a small
171 compression speed hit.
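This periodic state synchronization is the `--rsyncable` option; a minimal sketch, assuming a multithread-capable `zstd` build (the option requires multithreading, hence `-T0`):

```shell
seq 1 100000 > /tmp/zex_rs.txt
# --rsyncable periodically resets compression state so tools like rsync
# can re-match unchanged regions of the compressed file.
zstd -f --rsyncable -T0 /tmp/zex_rs.txt -o /tmp/zex_rs.zst
zstd -t /tmp/zex_rs.zst
```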
182 do not store dictionary ID within frame header (dictionary compression).
191 This is also used during compression when used with `--patch-from=`. In this case,
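A minimal `--patch-from=` round trip, with illustrative file names; the old version must be supplied at both compression and decompression time:

```shell
printf 'v1 data %.0s' $(seq 2000) > /tmp/zex_old.txt
sed 's/v1/v2/g' /tmp/zex_old.txt > /tmp/zex_new.txt
# Compress the new version as a patch against the old one.
zstd -f --patch-from=/tmp/zex_old.txt /tmp/zex_new.txt -o /tmp/zex_patch.zst
# Reconstructing the new version requires the same old file.
zstd -f -d --patch-from=/tmp/zex_old.txt /tmp/zex_patch.zst -o /tmp/zex_new.out
cmp /tmp/zex_new.txt /tmp/zex_new.out
```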
200 This information will be used to better optimize compression parameters, resulting in
201 better and potentially faster compression, especially for smaller source sizes.
204 will be when optimizing compression parameters. If the stream size is relatively
205 small, this guess may be a poor one, resulting in a higher compression ratio than
207 Exact guesses result in better compression ratios. Overestimates result in slightly
208 degraded compression ratios, while underestimates may result in significant degradation.
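When input arrives on stdin the size is unknown, which is where `--stream-size` applies; a small sketch (the 4096-byte size and paths are illustrative, and the hint must match the real size exactly):

```shell
# A 4096-byte input delivered over stdin, so zstd cannot see its size.
head -c 4096 /dev/zero > /tmp/zex_ss.txt
zstd -f --stream-size=4096 -o /tmp/zex_ss.zst < /tmp/zex_ss.txt
zstd -t /tmp/zex_ss.zst
```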
225 remove source file(s) after successful compression or decompression. If used in combination with
228 keep source file(s) after successful compression or decompression.
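The effect of `--rm` versus the default keep behavior, with illustrative file names:

```shell
printf 'keep me %.0s' $(seq 500) > /tmp/zex_rm.txt
cp /tmp/zex_rm.txt /tmp/zex_keep.txt
# --rm deletes the source on success, leaving only zex_rm.txt.zst.
zstd -f --rm /tmp/zex_rm.txt
# -k (the default) keeps the source alongside zex_keep.txt.zst.
zstd -f -k /tmp/zex_keep.txt
```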
258 support, zstd can compress to or decompress from other compression algorithm
276 Shows the default compression parameters that will be used for a
288 They set the compression level and number of threads to use during compression, respectively.
292 `ZSTD_CLEVEL` just replaces the default compression level (`3`).
294 …_NBTHREADS` can be used to set the number of threads `zstd` will attempt to use during compression.
300 `-#` for compression level and `-T#` for number of compression threads.
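The environment variables and their precedence relative to the flags can be sketched as follows (file names are illustrative):

```shell
seq 1 5000 > /tmp/zex_env.txt
# ZSTD_CLEVEL replaces the default level 3; ZSTD_NBTHREADS sets worker count.
ZSTD_CLEVEL=19 ZSTD_NBTHREADS=2 zstd -f /tmp/zex_env.txt -o /tmp/zex_env.zst
# Command-line flags still take precedence over the environment:
# this run compresses at level 1, not 19.
ZSTD_CLEVEL=19 zstd -f -1 /tmp/zex_env.txt -o /tmp/zex_env1.zst
```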
305 `zstd` offers _dictionary_ compression,
309 Then during compression and decompression, reference the same dictionary,
311 Compression of small files similar to the sample set will be greatly improved.
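An end-to-end dictionary workflow, with an illustrative synthetic sample set (real training sets should be larger and drawn from representative data):

```shell
# Build a small sample set of similar files (contents are illustrative).
mkdir -p /tmp/zex_samples
for i in $(seq 1 100); do
  for j in $(seq 1 20); do
    printf '{"id":%s,"seq":%s,"role":"admin","active":true}\n' "$i" "$j"
  done > /tmp/zex_samples/s$i.json
done
# Train a dictionary from the samples.
zstd --train /tmp/zex_samples/* -o /tmp/zex_dict
# Reference the same dictionary for compression and decompression.
zstd -f -D /tmp/zex_dict /tmp/zex_samples/s1.json -o /tmp/zex_s1.zst
zstd -f -D /tmp/zex_dict -d /tmp/zex_s1.zst -o /tmp/zex_s1.out
cmp /tmp/zex_samples/s1.json /tmp/zex_s1.out
```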
331 Use `#` compression level during training (optional).
332 Will generate statistics more tuned for selected compression level,
333 resulting in a _small_ compression ratio improvement for this level.
369 in size until compression ratio of the truncated dictionary is at most
370 _shrinkDictMaxRegression%_ worse than the compression ratio of the largest dictionary.
426 benchmark file(s) using compression level #
428 benchmark file(s) using multiple compression levels, from `-b#` to `-e#` (inclusive)
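A short benchmark run over a level range (the file and the `-i1` duration cap are illustrative):

```shell
seq 1 20000 > /tmp/zex_bench.txt
# Benchmark levels 1 through 3; -i1 limits each measurement to ~1 second.
zstd -b1 -e3 -i1 /tmp/zex_bench.txt
```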
438 **Methodology:** For both compression and decompression speed, the entire input is compressed/decom…
440 ADVANCED COMPRESSION OPTIONS
443 Select the size of each compression job.
445 Each compression job is run in parallel, so this value indirectly impacts the number of active threads.
446 Default job size varies depending on compression level (generally `4 * windowSize`).
453 `zstd` provides 22 predefined compression levels.
454 The selected or default predefined compression level can be changed with
455 advanced compression options.
458 taken from the selected or default compression level.
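A sketch of overriding individual parameters on top of a predefined level, using the `--zstd=` syntax; the specific values (`wlog=23`, `slog=6`) are illustrative:

```shell
seq 1 50000 > /tmp/zex_adv.txt
# Start from level 19's parameters, then override windowLog and searchLog.
zstd -f -19 --zstd=wlog=23,slog=6 /tmp/zex_adv.txt -o /tmp/zex_adv.zst
# Verify the resulting frame decompresses cleanly.
zstd -t /tmp/zex_adv.zst
```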
473 improves compression ratio.
484 Bigger hash tables cause fewer collisions, which usually makes compression
485 faster, but requires more memory during compression.
493 improves compression ratio.
494 It also slows down compression speed and increases memory requirements for
495 compression.
506 compression ratio but decreases compression speed.
513 Larger search lengths usually decrease compression ratio but improve
523 A larger `targetLength` usually improves compression ratio
524 but decreases compression speed.
528 Impact is reversed: a larger `targetLength` increases compression speed
529 but decreases compression ratio.
538 Reloading more data improves compression ratio, but decreases speed.
553 Bigger hash tables usually improve compression ratio at the expense of more
554 memory during compression and a decrease in compression speed.
563 Larger/very small values usually decrease compression ratio.
573 Larger bucket sizes improve collision resolution but decrease compression
584 Larger values will improve compression speed. Deviating far from the
585 default value will likely result in a decrease in compression ratio.
590 The following parameters set advanced compression options to something