| /linux/lib/xz/ |
| xz_dec_test.c |
    51   static struct xz_buf buffers = {  [variable]
    74   buffers.in_pos = 0;  [in xz_dec_test_open()]
    75   buffers.in_size = 0;  [in xz_dec_test_open()]
    76   buffers.out_pos = 0;  [in xz_dec_test_open()]
    119  while ((remaining > 0 || buffers.out_pos == buffers.out_size)  [in xz_dec_test_write()]
    121  if (buffers.in_pos == buffers.in_size) {  [in xz_dec_test_write()]
    122  buffers.in_pos = 0;  [in xz_dec_test_write()]
    123  buffers.in_size = min(remaining, sizeof(buffer_in));  [in xz_dec_test_write()]
    124  if (copy_from_user(buffer_in, buf, buffers.in_size))  [in xz_dec_test_write()]
    127  buf += buffers.in_size;  [in xz_dec_test_write()]
    [all …]
|
| /linux/Documentation/userspace-api/media/v4l/ |
| mmap.rst |
    18  Streaming is an I/O method where only pointers to buffers are exchanged
    20  mapping is primarily intended to map buffers in device memory into the
    24  drivers support streaming as well, allocating buffers in DMA-able main
    27  A driver can support many sets of buffers. Each set is identified by a
    32  To allocate device buffers applications call the
    34  of buffers and buffer type, for example ``V4L2_BUF_TYPE_VIDEO_CAPTURE``.
    35  This ioctl can also be used to change the number of buffers or to free
    36  the allocated memory, provided none of the buffers are still mapped.
    38  Before applications can access the buffers they must map them into their
    40  location of the buffers in device memory can be determined with the
    [all …]
|
| capture.c.rst |
    52   struct buffer *buffers;
    92   if (-1 == read(fd, buffers[0].start, buffers[0].length)) {
    107  process_image(buffers[0].start, buffers[0].length);
    133  process_image(buffers[buf.index].start, buf.bytesused);
    161  if (buf.m.userptr == (unsigned long)buffers[i].start
    162  && buf.length == buffers[i].length)
    269  buf.m.userptr = (unsigned long)buffers[i].start;
    270  buf.length = buffers[i].length;
    288  free(buffers[0].start);
    293  if (-1 == munmap(buffers[i].start, buffers[i].length))
    [all …]
|
| userp.rst |
    26  No buffers (planes) are allocated beforehand, consequently they are not
    27  indexed and cannot be queried like mapped buffers with the
    51  :ref:`VIDIOC_QBUF <VIDIOC_QBUF>` ioctl. Although buffers are commonly
    60  Filled or displayed buffers are dequeued with the
    66  Applications must take care not to free buffers without dequeuing.
    67  Firstly, the buffers remain locked for longer, wasting physical memory.
    73  buffers, to start capturing and enter the read loop. Here the
    76  and enqueue buffers, when enough buffers are stacked up output is
    78  buffers it must wait until an empty buffer can be dequeued and reused.
    80  more buffers can be dequeued. By default :ref:`VIDIOC_DQBUF
    [all …]
|
| dev-decoder.rst |
    13   from the client to process these buffers.
    51   the destination buffer queue; for decoders, the queue of buffers containing
    52   decoded frames; for encoders, the queue of buffers containing an encoded
    55   into ``CAPTURE`` buffers.
    85   ``OUTPUT`` buffers must be queued by the client in decode order; for
    86   encoders ``CAPTURE`` buffers must be returned by the encoder in decode order.
    93   buffers must be queued by the client in display order; for decoders,
    94   ``CAPTURE`` buffers must be returned by the decoder in display order.
    118  the source buffer queue; for decoders, the queue of buffers containing
    119  an encoded bytestream; for encoders, the queue of buffers containing raw
    [all …]
|
| dmabuf.rst |
    10   The DMABUF framework provides a generic method for sharing buffers
    19   exporting V4L2 buffers as DMABUF file descriptors.
    25   importing DMA buffers through DMABUF file descriptors is supported is
    29   This I/O method is dedicated to sharing DMA buffers between different
    32   application. Next, these buffers are exported to the application as file
    63   buffers, every plane can be associated with a different DMABUF
    64   descriptor. Although buffers are commonly cycled, applications can pass
    121  Captured or displayed buffers are dequeued with the
    129  buffers, to start capturing and enter the read loop. Here the
    132  and enqueue buffers, when enough buffers are stacked up output is
    [all …]
|
| vidioc-create-bufs.rst |
    13  VIDIOC_CREATE_BUFS - Create buffers for Memory Mapped or User Pointer or DMA Buffer I/O
    34  This ioctl is used to create buffers for :ref:`memory mapped <mmap>`
    38  over buffers is required. This ioctl can be called multiple times to
    39  create buffers of different sizes.
    41  To allocate the device buffers applications must initialize the relevant
    43  ``count`` field must be set to the number of requested buffers, the
    47  The ``format`` field specifies the image format that the buffers must be
    54  sizes (for multi-planar formats) will be used for the allocated buffers.
    58  The buffers created by this ioctl will have as minimum size the size
    68  will attempt to allocate up to the requested number of buffers and store
    [all …]
|
| vidioc-reqbufs.rst |
    36  Memory mapped buffers are located in device memory and must be allocated
    38  space. User buffers are allocated by applications themselves, and this
    40  to setup some internal structures. Similarly, DMABUF buffers are
    45  To allocate device buffers applications initialize all fields of the
    48  the desired number of buffers, ``memory`` must be set to the requested
    51  allocate the requested number of buffers and it stores the actual number
    54  number is also possible when the driver requires more buffers to
    56  buffers, one displayed and one filled by the application.
    62  buffers. Note that if any buffers are still mapped or exported via DMABUF,
    66  If ``V4L2_BUF_CAP_SUPPORTS_ORPHANED_BUFS`` is set, then these buffers are
    [all …]
|
| vidioc-streamon.rst |
    43  Capture hardware is disabled and no input buffers are filled (if there
    44  are any empty buffers in the incoming queue) until ``VIDIOC_STREAMON``
    51  If ``VIDIOC_STREAMON`` fails then any already queued buffers will remain
    55  in progress, unlocks any user pointer buffers locked in physical memory,
    56  and it removes all buffers from the incoming and outgoing queues. That
    63  If buffers have been queued with :ref:`VIDIOC_QBUF` and
    65  ``VIDIOC_STREAMON``, then those queued buffers will also be removed from
    77  but ``VIDIOC_STREAMOFF`` will return queued buffers to their starting
    95  The buffer ``type`` is not supported, or no buffers have been
|
| v4l2grab.c.rst |
    68   struct buffer *buffers;
    97   buffers = calloc(req.count, sizeof(*buffers));
    107  buffers[n_buffers].length = buf.length;
    108  buffers[n_buffers].start = v4l2_mmap(NULL, buf.length,
    112  if (MAP_FAILED == buffers[n_buffers].start) {
    157  fwrite(buffers[buf.index].start, buf.bytesused, 1, fout);
    166  v4l2_munmap(buffers[i].start, buffers[i].length);
|
| dev-encoder.rst |
    158  desired size of ``CAPTURE`` buffers; the encoder may adjust it to
    170  adjusted size of ``CAPTURE`` buffers.
    308  coded video. It does *not* set the rate at which buffers arrive on the
    366  buffers to be aligned to 1920x1088 for codecs with 16x16 macroblock
    376  7. Allocate buffers for both ``OUTPUT`` and ``CAPTURE`` via
    382  requested number of buffers to allocate; greater than zero.
    394  actual number of buffers allocated.
    398  The actual number of allocated buffers may differ from the ``count``
    404  To allocate more than the minimum number of OUTPUT buffers (for pipeline
    406  control to get the minimum number of buffers required, and pass the
    [all …]
|
| dev-stateless-decoder.rst |
    101  destination buffers parsed/decoded from the bytestream.
    168  to obtain up-to-date information about the buffers size and layout.
    170  6. Allocate source (bytestream) buffers via :c:func:`VIDIOC_REQBUFS` on
    176  requested number of buffers to allocate; greater than zero.
    187  actual number of buffers allocated.
    190  minimum of required number of ``OUTPUT`` buffers for the given format and
    192  to get the actual number of buffers allocated.
    194  7. Allocate destination (raw format) buffers via :c:func:`VIDIOC_REQBUFS` on the
    200  requested number of buffers to allocate; greater than zero. The client
    201  is responsible for deducing the minimum number of buffers required
    [all …]
|
| /linux/fs/verity/ |
| enable.c |
    75   struct block_buffer *buffers = &_buffers[1];  [in build_merkle_tree(), local]
    93   buffers[level].data = kzalloc(params->block_size, GFP_KERNEL);  [in build_merkle_tree()]
    94   if (!buffers[level].data) {  [in build_merkle_tree()]
    99   buffers[num_levels].data = root_hash;  [in build_merkle_tree()]
    100  buffers[num_levels].is_root_hash = true;  [in build_merkle_tree()]
    110  buffers[-1].filled = min_t(u64, params->block_size,  [in build_merkle_tree()]
    112  bytes_read = __kernel_read(filp, buffers[-1].data,  [in build_merkle_tree()]
    113  buffers[-1].filled, &pos);  [in build_merkle_tree()]
    119  if (bytes_read != buffers[-1].filled) {  [in build_merkle_tree()]
    124  err = hash_one_block(params, &buffers[-1]);  [in build_merkle_tree()]
    [all …]
|
| /linux/drivers/iio/buffer/ |
| industrialio-hw-consumer.c |
    23   struct list_head buffers;  [member]
    58   list_for_each_entry(buf, &hwc->buffers, head) {  [in iio_hw_consumer_get_buffer()]
    72   list_add_tail(&buf->head, &hwc->buffers);  [in iio_hw_consumer_get_buffer()]
    94   INIT_LIST_HEAD(&hwc->buffers);  [in iio_hw_consumer_alloc()]
    116  list_for_each_entry(buf, &hwc->buffers, head)  [in iio_hw_consumer_alloc()]
    134  list_for_each_entry_safe(buf, n, &hwc->buffers, head)  [in iio_hw_consumer_free()]
    183  list_for_each_entry(buf, &hwc->buffers, head) {  [in iio_hw_consumer_enable()]
    192  list_for_each_entry_continue_reverse(buf, &hwc->buffers, head)  [in iio_hw_consumer_enable()]
    206  list_for_each_entry(buf, &hwc->buffers, head)  [in iio_hw_consumer_disable()]
|
| /linux/Documentation/userspace-api/media/dvb/ |
| dmx-reqbufs.rst |
    38  Memory mapped buffers are located in device memory and must be allocated
    40  space. User buffers are allocated by applications themselves, and this
    42  to setup some internal structures. Similarly, DMABUF buffers are
    47  To allocate device buffers applications initialize all fields of the
    49  to the desired number of buffers, and ``size`` to the size of each
    53  attempt to allocate the requested number of buffers and it stores the actual
    55  number is also possible when the driver requires more buffers to
    63  buffers, however this cannot succeed when any buffers are still mapped.
    64  A ``count`` value of zero frees all buffers, after aborting or finishing
|
| /linux/drivers/staging/media/starfive/camss/ |
| stf-capture.c |
    79   struct stf_v_buf *output = &cap->buffers;  [in stf_init_addrs()]
    113  struct stf_v_buf *output = &cap->buffers;  [in stf_cap_s_cfg()]
    138  struct stf_v_buf *output = &cap->buffers;  [in stf_cap_s_cleanup()]
    244  cap->buffers.state = STF_OUTPUT_OFF;  [in stf_capture_init()]
    245  cap->buffers.buf[0] = NULL;  [in stf_capture_init()]
    246  cap->buffers.buf[1] = NULL;  [in stf_capture_init()]
    247  cap->buffers.active_buf = 0;  [in stf_capture_init()]
    248  atomic_set(&cap->buffers.frame_skip, 4);  [in stf_capture_init()]
    249  INIT_LIST_HEAD(&cap->buffers.pending_bufs);  [in stf_capture_init()]
    250  INIT_LIST_HEAD(&cap->buffers.ready_bufs);  [in stf_capture_init()]
    [all …]
|
| /linux/drivers/scsi/isci/ |
| unsolicited_frame_control.c |
    110  uf = &uf_control->buffers.array[i];  [in sci_unsolicited_frame_control_construct()]
    136  *frame_header = &uf_control->buffers.array[frame_index].header->data;  [in sci_unsolicited_frame_control_get_header()]
    149  *frame_buffer = uf_control->buffers.array[frame_index].buffer;  [in sci_unsolicited_frame_control_get_buffer()]
    184  uf_control->buffers.array[frame_index].state = UNSOLICITED_FRAME_RELEASED;  [in sci_unsolicited_frame_control_release_frame()]
    198  while (uf_control->buffers.array[frame_get].state == UNSOLICITED_FRAME_RELEASED) {  [in sci_unsolicited_frame_control_release_frame()]
    199  uf_control->buffers.array[frame_get].state = UNSOLICITED_FRAME_EMPTY;  [in sci_unsolicited_frame_control_release_frame()]
|
| /linux/Documentation/ABI/testing/ |
| sysfs-kernel-dmabuf-buffers |
    1   What: /sys/kernel/dmabuf/buffers
    5   Description: The /sys/kernel/dmabuf/buffers directory contains a
    7   /sys/kernel/dmabuf/buffers/<inode_number> will contain the
    12  What: /sys/kernel/dmabuf/buffers/<inode_number>/exporter_name
    19  What: /sys/kernel/dmabuf/buffers/<inode_number>/size
|
| /linux/Documentation/admin-guide/media/ |
| cafe_ccic.rst |
    37  buffers until the time comes to transfer data. If this option is set,
    38  then worst-case-sized buffers will be allocated at module load time.
    42  - dma_buf_size: The size of DMA buffers to allocate. Note that this
    43  option is only consulted for load-time allocation; when buffers are
    48  buffers. Normally, the driver tries to use three buffers; on faster
    51  - min_buffers: The minimum number of streaming I/O buffers that the driver
    56  - max_buffers: The maximum number of streaming I/O buffers; default is
|
| /linux/drivers/android/tests/ |
| binder_alloc_kunit.c |
    166  struct binder_buffer *buffers[],  [in binder_alloc_test_alloc_buf(), argument]
    173  buffers[i] = binder_alloc_new_buf(alloc, sizes[i], 0, 0, 0);  [in binder_alloc_test_alloc_buf()]
    174  if (IS_ERR(buffers[i]) ||  [in binder_alloc_test_alloc_buf()]
    175  !check_buffer_pages_allocated(test, alloc, buffers[i], sizes[i]))  [in binder_alloc_test_alloc_buf()]
    184  struct binder_buffer *buffers[],  [in binder_alloc_test_free_buf(), argument]
    191  binder_alloc_free_buf(alloc, buffers[seq[i]]);  [in binder_alloc_test_free_buf()]
    235  struct binder_buffer *buffers[BUFFER_NUM];  [in binder_alloc_test_alloc_free(), local]
    239  failures = binder_alloc_test_alloc_buf(test, alloc, buffers,  [in binder_alloc_test_alloc_free()]
    247  failures = binder_alloc_test_free_buf(test, alloc, buffers,  [in binder_alloc_test_alloc_free()]
    256  failures = binder_alloc_test_alloc_buf(test, alloc, buffers,  [in binder_alloc_test_alloc_free()]
    [all …]
|
| /linux/kernel/trace/ |
| ring_buffer.c |
    63   int buffers[];  [member]
    581  struct ring_buffer_per_cpu **buffers;  [member]
    735  struct ring_buffer_per_cpu *cpu_buffer = buffer->buffers[smp_processor_id()];  [in ring_buffer_event_time_stamp()]
    776  read = local_read(&buffer->buffers[cpu]->pages_read);  [in ring_buffer_nr_dirty_pages()]
    777  lost = local_read(&buffer->buffers[cpu]->pages_lost);  [in ring_buffer_nr_dirty_pages()]
    778  cnt = local_read(&buffer->buffers[cpu]->pages_touched);  [in ring_buffer_nr_dirty_pages()]
    796  struct ring_buffer_per_cpu *cpu_buffer = buffer->buffers[cpu];  [in full_hit()]
    870  if (WARN_ON_ONCE(!buffer->buffers))  [in ring_buffer_wake_waiters()]
    875  cpu_buffer = buffer->buffers[cpu];  [in ring_buffer_wake_waiters()]
    895  cpu_buffer = buffer->buffers[cpu];  [in rb_watermark_hit()]
    [all …]
|
| /linux/lib/reed_solomon/ |
| decode_rs.c |
    32  uint16_t *lambda = rsc->buffers + RS_DECODE_LAMBDA * (nroots + 1);
    33  uint16_t *syn = rsc->buffers + RS_DECODE_SYN * (nroots + 1);
    34  uint16_t *b = rsc->buffers + RS_DECODE_B * (nroots + 1);
    35  uint16_t *t = rsc->buffers + RS_DECODE_T * (nroots + 1);
    36  uint16_t *omega = rsc->buffers + RS_DECODE_OMEGA * (nroots + 1);
    37  uint16_t *root = rsc->buffers + RS_DECODE_ROOT * (nroots + 1);
    38  uint16_t *reg = rsc->buffers + RS_DECODE_REG * (nroots + 1);
    39  uint16_t *loc = rsc->buffers + RS_DECODE_LOC * (nroots + 1);
|
| /linux/Documentation/filesystems/ |
| relay.rst |
    12  as a set of per-cpu kernel buffers ('channel buffers'), each
    14  clients write into the channel buffers using efficient write
    19  are associated with the channel buffers using the API described below.
    21  The format of the data logged into the channel buffers is completely
    36  sub-buffers. Messages are written to the first sub-buffer until it is
    38  the next (if available). Messages are never split across sub-buffers.
    60  read sub-buffers; thus in cases where read(2) is being used to drain
    61  the channel buffers, special-purpose communication between kernel and
    96  allowing both to convey the state of buffers (full, empty, amount of
    98  consumes the read sub-buffers; thus in cases where read(2) is being
    [all …]
|
| /linux/Documentation/userspace-api/media/mediactl/ |
| request-api.rst |
    21   on the media pipeline, reconfigure it for the next frame, queue the buffers to
    28   specific buffers. This allows user-space to schedule several tasks ("requests")
    59   instead of being immediately applied, and buffers queued to a request do not
    65   Once the configuration and buffers of the request are specified, it can be
    72   output buffers, not for capture buffers. Attempting to add a capture buffer
    77   buffers are processed. Media controller drivers do a best effort implementation
    82   It is not allowed to mix queuing requests with directly queuing buffers:
    99   once all its associated buffers are available for dequeuing and all the
    102  dequeue its buffers: buffers that are available halfway through a request can
    135  to queue many such buffers in advance. It can also take advantage of requests'
    [all …]
|
| /linux/Documentation/networking/device_drivers/ethernet/google/ |
| gve.rst |
    62   Therefore, the packet buffers can be anywhere in guest memory.
    125  The descriptor rings are power-of-two-sized ring buffers consisting of
    131  Each queue's buffers must be registered in advance with the device as a
    136  gve maps the buffers for transmit rings into a FIFO and copies the packets
    141  The buffers for receive rings are put into a data ring that is the same
    149  - TX and RX buffers queues, which send descriptors to the device, use MMIO
    163  - TX packets have a 16 bit completion_tag and RX buffers have a 16 bit
    169  A packet's buffers are DMA mapped for the device to access before transmission.
    170  After the packet was successfully transmitted, the buffers are unmapped.
    174  The driver posts fixed sized buffers to HW on the RX buffer queue. The packet
|