Lines Matching full:descriptor
30 * The internal state information of a descriptor is the key element to allow
36 * Descriptor Ring
38 * The descriptor ring is an array of descriptors. A descriptor contains
41 * "Data Rings" below). Each descriptor is assigned an ID that maps
42 * directly to index values of the descriptor array, and has a state. The ID
43 * and the state are bitwise combined into a single descriptor field named
54 * descriptor (transitioning it back to reserved), but in the committed
59 * writer cannot reopen the descriptor.
66 * descriptor to query. This can yield a possible fifth (pseudo) state:
69 * The descriptor being queried has an unexpected ID.
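The ID/state packing and the desc_miss pseudo state described in the matches above can be modeled in user space. A minimal sketch, assuming a layout where the top two bits of the state_var word hold the state and the remaining bits hold the ID (the widths and helper names are illustrative, not necessarily the kernel's actual layout):

```c
#include <assert.h>
#include <limits.h>

/* Illustrative user-space model: the top two bits of a state_var word
 * hold the state, the rest hold the descriptor ID. Widths and names
 * are assumptions, not the kernel's actual layout. */
enum desc_state {
	desc_miss	= -1,	/* pseudo state: unexpected ID */
	desc_reserved	= 0x0,
	desc_committed	= 0x1,
	desc_finalized	= 0x2,
	desc_reusable	= 0x3,
};

#define SV_BITS		(sizeof(unsigned long) * CHAR_BIT)
#define STATE_SHIFT	(SV_BITS - 2)
#define STATE_MASK	(3UL << STATE_SHIFT)
#define ID_MASK		(~STATE_MASK)

/* Bitwise-combine an ID and a state into one state_var value. */
static unsigned long make_sv(unsigned long id, enum desc_state state)
{
	return ((unsigned long)state << STATE_SHIFT) | (id & ID_MASK);
}

/* Query the state, yielding the desc_miss pseudo state on ID mismatch. */
static enum desc_state query_state(unsigned long sv, unsigned long expect_id)
{
	if ((sv & ID_MASK) != (expect_id & ID_MASK))
		return desc_miss;
	return (enum desc_state)(sv >> STATE_SHIFT);
}
```

Because the ID maps directly to an array index, a reader holding only a state_var value can both locate the descriptor and detect that it has since been recycled.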
71 * The descriptor ring has a @tail_id that contains the ID of the oldest
72 * descriptor and @head_id that contains the ID of the newest descriptor.
74 * When a new descriptor should be created (and the ring is full), the tail
75 * descriptor is invalidated by first transitioning to the reusable state and
77 * associated with the tail descriptor (for the text ring). Then
79 * @state_var of the new descriptor is initialized to the new ID and reserved
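The tail-invalidation step sketched above (transitioning the tail descriptor to reusable before its slot is reused) can be modeled with a compare-and-exchange that tolerates losing the race to another context. A hedged sketch; the state encoding and names are assumptions:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hedged model of recycling the tail descriptor: it may only move from
 * finalized to reusable, and losing the race to another context that
 * already made it reusable still counts as success. The state encoding
 * is an assumption for illustration. */
enum state { RESERVED, COMMITTED, FINALIZED, REUSABLE };

struct desc {
	atomic_int state;
};

static bool make_reusable(struct desc *d)
{
	int expect = FINALIZED;

	if (atomic_compare_exchange_strong(&d->state, &expect, REUSABLE))
		return true;
	/* CAS failed: success anyway if another context already did it. */
	return expect == REUSABLE;
}

/* Sample descriptors for demonstration. */
static struct desc g_tail = { FINALIZED };
static struct desc g_busy = { COMMITTED };
```

A descriptor that is still reserved or committed cannot be recycled, which is why writers must eventually finalize records.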
86 * Descriptor Finalization
117 * the identifier of a descriptor that is associated with the data block. A
121 * 1) The descriptor associated with the data block is in the committed
124 * 2) The blk_lpos struct within the descriptor associated with the data
144 * descriptor. If a data block is not valid, the @tail_lpos cannot be
150 * stored in an array with the same number of elements as the descriptor ring.
151 * Each info corresponds to the descriptor of the same index in the
152 * descriptor ring. Info validity is confirmed by evaluating the corresponding
153 * descriptor before and after loading the info.
274 * push descriptor tail (id), then push descriptor head (id)
277 * push data tail (lpos), then set new descriptor reserved (state)
280 * push descriptor tail (id), then set new descriptor reserved (state)
283 * push descriptor tail (id), then set new descriptor reserved (state)
286 * set new descriptor id and reserved (state), then allow writer changes
289 * set old descriptor reusable (state), then modify new data block area
295 * store writer changes, then set new descriptor committed (state)
298 * set descriptor reserved (state), then read descriptor data
301 * set new descriptor committed (state), then check descriptor head (id)
304 * set descriptor reusable (state), then push data tail (lpos)
307 * set descriptor reusable (state), then push descriptor tail (id)
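One of the ordering pairs listed above, "store writer changes, then set new descriptor committed (state)", maps naturally onto a release store paired with an acquire load. A minimal user-space sketch with illustrative names, standing in for the kernel's barriers:

```c
#include <assert.h>
#include <stdatomic.h>

/* Hedged sketch of the pair "store writer changes, then set new
 * descriptor committed (state)": the release store on @state orders the
 * data store before it, so an acquiring reader that observes COMMITTED
 * also observes the data. Names are illustrative. */
enum { RESERVED, COMMITTED };

struct record {
	int data;		/* writer-owned while reserved */
	_Atomic int state;
};

static void commit(struct record *r, int data)
{
	r->data = data;	/* store writer changes... */
	/* ...then publish the committed state. */
	atomic_store_explicit(&r->state, COMMITTED, memory_order_release);
}

static int read_committed(struct record *r, int *out)
{
	/* Acquire pairs with the release in commit(). */
	if (atomic_load_explicit(&r->state, memory_order_acquire) != COMMITTED)
		return 0;
	*out = r->data;
	return 1;
}

static struct record g_rec;	/* starts zeroed, i.e. RESERVED */
static int g_out;

/* Helper so the commit and the read are observable in one call. */
static int commit_and_read(void)
{
	commit(&g_rec, 42);
	return read_committed(&g_rec, &g_out);
}
```

Each entry in the list above follows this same shape: a store that must be visible first, then the store that publishes it.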
345 * @id: the ID of the associated descriptor
349 * descriptor.
357 * Return the descriptor associated with @n. @n can be either a
358 * descriptor ID or a sequence number.
367 * descriptor ID or a sequence number.
414 /* Query the state of a descriptor. */
425 * Get a copy of a specified descriptor and return its queried state. If the
426 * descriptor is in an inconsistent state (miss or reserved), the caller can
427 * only expect the descriptor's @state_var field to be valid.
430 * non-state_var data, they are only valid if the descriptor is in a
443 /* Check the descriptor state. */
448 * The descriptor is in an inconsistent state. Set at least
456 * Guarantee the state is loaded before copying the descriptor
457 * content. This avoids copying obsolete descriptor content that might
458 * not apply to the descriptor state. This pairs with _prb_commit:B.
474 * Copy the descriptor data. The data is not valid until the
488 * 1. Guarantee the descriptor content is loaded before re-checking
489 * the state. This avoids reading an obsolete descriptor state
505 * state. This avoids reading an obsolete descriptor state that may
528 * The data has been copied. Return the current descriptor state,
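The check-copy-recheck read described in the surrounding matches can be modeled as follows. This is a simplified single-structure sketch; the field names and the C11 fence choice are assumptions standing in for the kernel's read barriers:

```c
#include <assert.h>
#include <stdatomic.h>
#include <string.h>

/* Simplified model of the check-copy-recheck read: load the state, copy
 * the content, then re-check that the state is unchanged before trusting
 * the copy. The acquire fence stands in for the read barrier; field
 * names are illustrative. */
struct rec {
	_Atomic unsigned long state;
	char data[16];
};

static struct rec g_rec = { 7, "hello" };
static char g_buf[16];

/* Returns 1 if @out holds a copy consistent with state @want. */
static int read_consistent(struct rec *r, char *out, unsigned long want)
{
	if (atomic_load_explicit(&r->state, memory_order_acquire) != want)
		return 0;

	memcpy(out, r->data, sizeof(r->data));	/* copy the content */

	/* Ensure the copy is complete before re-checking the state. */
	atomic_thread_fence(memory_order_acquire);
	return atomic_load_explicit(&r->state, memory_order_relaxed) == want;
}
```

If the second load disagrees with the first, the copied content may belong to a recycled descriptor and must be discarded.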
540 * Take a specified descriptor out of the finalized state by attempting
557 * Given the text data ring, put the associated descriptor of each
560 * If there is any problem making the associated descriptor reusable, either
561 * the descriptor has not yet been finalized or another writer context has
586 * area. If the loaded value matches a valid descriptor ID,
587 * the blk_lpos of that descriptor will be checked to make
603 * This data block is invalid if the descriptor
612 * This data block is invalid if the descriptor
645 * Any descriptor states that have transitioned to reusable due to the
668 * sees the new tail lpos, any descriptor states that transitioned to
705 * 2. Guarantee the descriptor state loaded in
709 * recycled descriptor causing the tail lpos to
745 * Guarantee any descriptor states that have transitioned to
748 * the descriptor states reusable. This pairs with
762 * descriptor, thus invalidating the oldest descriptor. Before advancing
763 * the tail, the tail descriptor is made reusable and all data blocks up to
764 * and including the descriptor's data block are invalidated (i.e. the data
765 * ring tail is pushed past the data block of the descriptor being made
791 * tail and recycled the descriptor already. Success is
808 * descriptor can be made available for recycling. Invalidating
810 * data blocks once their associated descriptor is gone.
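Pushing the data ring tail past invalidated blocks, as the matches above describe, treats a lost compare-and-exchange as success when another context has already pushed the tail at least as far. A hedged sketch with illustrative lpos arithmetic:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hedged sketch of pushing the data ring tail: a failed compare-and-
 * exchange still counts as success if another context already pushed
 * the tail at least as far. The lpos arithmetic relies on defined
 * unsigned wraparound; names are illustrative. */
static _Atomic unsigned long tail_lpos;

static bool data_push_tail(unsigned long old_lpos, unsigned long new_lpos)
{
	unsigned long expect = old_lpos;

	if (atomic_compare_exchange_strong(&tail_lpos, &expect, new_lpos))
		return true;
	/* Lost the race: fine if the tail already moved past new_lpos. */
	return expect - old_lpos >= new_lpos - old_lpos;
}
```

Wrapping subtraction makes the "at least as far" comparison valid even across lpos rollover.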
817 * Check the next descriptor after @tail_id before pushing the tail
821 * A successful read implies that the next descriptor is less than or
830 * Guarantee any descriptor states that have transitioned to
832 * verifying the recycled descriptor state. A full memory
834 * descriptor states reusable. This pairs with desc_reserve:D.
842 * case that the descriptor has been recycled. This pairs
864 * Re-check the tail ID. The descriptor following @tail_id is
875 /* Reserve a new descriptor, invalidating the oldest if necessary. */
918 * Make space for the new descriptor by
927 * recycled descriptor state. A read memory barrier is
950 * recycling the descriptor. Data ring tail changes can
957 * descriptor. A full memory barrier is needed since
963 * finalize the previous descriptor. This pairs with
972 * If the descriptor has been recycled, verify the old state val.
983 * Assign the descriptor a new ID and set its state to reserved.
986 * Guarantee the new descriptor ID and state are stored before making
1023 * a specified descriptor.
1068 * 1. Guarantee any descriptor states that have transitioned
1071 * since other CPUs may have made the descriptor states
1108 * Try to resize an existing data block associated with the descriptor
1287 * Attempt to transition the newest descriptor from committed back to reserved
1289 * if the descriptor is not yet finalized and the provided @caller_id matches.
1304 * To reduce unnecessary reopening, first check if the descriptor
1393 /* Transition the newest descriptor back to the reserved state. */
1413 * exclusive access at that point. The descriptor may have
1575 * Attempt to finalize a specified descriptor. If this fails, the descriptor
1636 /* Descriptor reservation failures are tracked. */
1679 * previous descriptor now so that it can be made available to
1680 * readers. (For seq==0 there is no previous descriptor.)
1716 * Set the descriptor as committed. See "ABA Issues" about why
1719 * 1. Guarantee all record data is stored before the descriptor state
1723 * 2. Guarantee the descriptor state is stored as committed before
1725 * descriptor. This pairs with desc_reserve:D.
1771 * If this descriptor is no longer the head (i.e. a new record has
1875 * descriptor. However, it also verifies that the record is finalized and has
1896 * does not exist. A descriptor in the reserved or committed state
1907 * A descriptor in the reusable state may no longer have its data
1936 /* Extract the ID, used to specify the descriptor to read. */
1939 /* Get a local copy of the correct descriptor (if available). */
1963 /* Get the sequence number of the tail descriptor. */
1987 * that the descriptor has been recycled. This pairs with
2070 /* Extract the ID, used to specify the descriptor to read. */
2091 * the descriptor ID of the first one (@seq=0) for
2101 /* Diff of known descriptor IDs to compute related sequence numbers. */
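Computing a sequence number from a diff of known descriptor IDs can be sketched with wrapping unsigned arithmetic; the helper name and the assumption that IDs advance by one per record are illustrative:

```c
#include <assert.h>

/* Hedged sketch: with one known (id, seq) pair and IDs assumed to
 * advance by one per record, any other descriptor ID maps to a sequence
 * number by the ID difference. Unsigned wraparound handles rollover. */
static unsigned long seq_of(unsigned long known_id, unsigned long known_seq,
			    unsigned long id)
{
	return known_seq + (id - known_id);
}
```

For example, anchoring on the first record (seq 0) lets any later descriptor's sequence number be derived without a per-record lookup.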
2315 * @descs: The descriptor buffer for ringbuffer records.