Lines Matching full:descriptor

29 * The internal state information of a descriptor is the key element to allow
35 * Descriptor Ring
37 * The descriptor ring is an array of descriptors. A descriptor contains
40 * "Data Rings" below). Each descriptor is assigned an ID that maps
41 * directly to index values of the descriptor array and has a state. The ID
42 * and the state are bitwise combined into a single descriptor field named
53 * descriptor (transitioning it back to reserved), but in the committed
58 * writer cannot reopen the descriptor.
65 * descriptor to query. This can yield a possible fifth (pseudo) state:
68 * The descriptor being queried has an unexpected ID.
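
As a rough illustration of the packing these lines describe, the sketch below combines an ID and a 2-bit state into one word and derives the "miss" pseudo state from an ID mismatch. It is plain userspace C; the encoding and the helper names are assumptions for illustration, not the kernel's exact macros.

    #include <stdio.h>

    enum desc_state {
            desc_miss       = -1,   /* ID mismatch (pseudo state) */
            desc_reserved   = 0x0,  /* reserved by writer */
            desc_committed  = 0x1,  /* committed by writer, may be reopened */
            desc_finalized  = 0x2,  /* committed, no further modification */
            desc_reusable   = 0x3,  /* free, not yet used by any writer */
    };

    #define DESC_ST_BITS    2
    #define DESC_ID_BITS    (sizeof(unsigned long) * 8 - DESC_ST_BITS)
    #define DESC_ID_MASK    ((1UL << DESC_ID_BITS) - 1)

    static unsigned long make_state_val(unsigned long id, enum desc_state st)
    {
            return (id & DESC_ID_MASK) | ((unsigned long)st << DESC_ID_BITS);
    }

    /* Query a state_var value against the ID the caller expects. */
    static enum desc_state query_state(unsigned long state_val,
                                       unsigned long expected_id)
    {
            if ((state_val & DESC_ID_MASK) != expected_id)
                    return desc_miss;       /* slot was recycled */
            return (enum desc_state)(state_val >> DESC_ID_BITS);
    }

    int main(void)
    {
            unsigned long v = make_state_val(42, desc_committed);

            printf("state=%d\n", query_state(v, 42));   /* 1 (committed) */
            printf("state=%d\n", query_state(v, 10));   /* -1 (miss) */
            return 0;
    }
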
70 * The descriptor ring has a @tail_id that contains the ID of the oldest
71 * descriptor and @head_id that contains the ID of the newest descriptor.
73 * When a new descriptor should be created (and the ring is full), the tail
74 * descriptor is invalidated by first transitioning to the reusable state and
76 * associated with the tail descriptor (for the text ring). Then
78 * @state_var of the new descriptor is initialized to the new ID and reserved
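
The invalidation described above is needed because IDs grow without bound while mapping onto a fixed number of array slots, so the new descriptor necessarily reuses the tail descriptor's slot. A minimal sketch, assuming a power-of-2 ring size as in the kernel (sizes and values here are illustrative):

    #include <stdio.h>

    #define DESC_RING_SIZE  8UL                     /* illustrative */
    #define DESC_INDEX(id)  ((id) & (DESC_RING_SIZE - 1))

    int main(void)
    {
            unsigned long tail_id = 0;
            unsigned long new_id = 8;       /* head_id was 7: ring full */

            /* The new descriptor occupies the same slot as the tail
             * descriptor, so the tail must be invalidated first. */
            printf("new id %lu reuses slot %lu of tail id %lu\n",
                   new_id, DESC_INDEX(new_id), tail_id);
            return 0;
    }
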
85 * Descriptor Finalization
116 * the identifier of a descriptor that is associated with the data block. A
120 * 1) The descriptor associated with the data block is in the committed
123 * 2) The blk_lpos struct within the descriptor associated with the data
143 * descriptor. If a data block is not valid, the @tail_lpos cannot be
149 * stored in an array with the same number of elements as the descriptor ring.
150 * Each info corresponds to the descriptor of the same index in the
151 * descriptor ring. Info validity is confirmed by evaluating the corresponding
152 * descriptor before and after loading the info.
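
A sketch of the layout these lines imply: each data block starts with the ID of its descriptor, and the descriptor records where the block lives via a begin/next logical-position pair. The struct and field names are simplified stand-ins (the kernel's own structs are prb_data_block and prb_data_blk_lpos), and the validity check reuses the desc_state enum from the first sketch.

    struct data_block {
            unsigned long   id;     /* back-reference to the descriptor */
            char            data[]; /* record text follows */
    };

    struct blk_lpos {
            unsigned long   begin;  /* logical position of the block */
            unsigned long   next;   /* logical position after the block */
    };

    /* Conditions 1) and 2) above: the descriptor must be committed or
     * finalized AND its blk_lpos must point back at this block. */
    static int block_valid(enum desc_state st, const struct blk_lpos *lp,
                           unsigned long blk_begin)
    {
            return (st == desc_committed || st == desc_finalized) &&
                   lp->begin == blk_begin;
    }
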
273 * push descriptor tail (id), then push descriptor head (id)
276 * push data tail (lpos), then set new descriptor reserved (state)
279 * push descriptor tail (id), then set new descriptor reserved (state)
282 * push descriptor tail (id), then set new descriptor reserved (state)
285 * set new descriptor id and reserved (state), then allow writer changes
288 * set old descriptor reusable (state), then modify new data block area
294 * store writer changes, then set new descriptor committed (state)
297 * set descriptor reserved (state), then read descriptor data
300 * set new descriptor committed (state), then check descriptor head (id)
303 * set descriptor reusable (state), then push data tail (lpos)
306 * set descriptor reusable (state), then push descriptor tail (id)
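
Each pairing above is an ordering guarantee between a store on one side and a load on another. As one concrete example, the "store writer changes, then set new descriptor committed" pairing can be modeled with C11 release/acquire in place of the kernel's barrier primitives; this is a sketch of the ordering, not the kernel's code.

    #include <stdatomic.h>
    #include <string.h>

    static char record_text[16];
    static _Atomic unsigned long state_var;

    void writer_commit(unsigned long committed_val)
    {
            strcpy(record_text, "hello");
            /* Release: all record data is visible before the state. */
            atomic_store_explicit(&state_var, committed_val,
                                  memory_order_release);
    }

    int reader_try_read(unsigned long committed_val, char *out, size_t n)
    {
            /* Acquire: if we observe "committed", we also see the data. */
            if (atomic_load_explicit(&state_var,
                                     memory_order_acquire) != committed_val)
                    return 0;
            memcpy(out, record_text, n);
            return 1;
    }
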
344 * @id: the ID of the associated descriptor
348 * descriptor.
356 * Return the descriptor associated with @n. @n can be either a
357 * descriptor ID or a sequence number.
366 * descriptor ID or a sequence number.
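
A sketch of that mapping, assuming a power-of-2 ring: IDs are assigned so that a descriptor's ID and its record's sequence number stay congruent modulo the ring size, which is why the same masking serves for either. The struct here is a simplified stand-in.

    struct prb_desc {
            _Atomic unsigned long state_var;        /* simplified */
    };

    static struct prb_desc *to_desc(struct prb_desc *descs, unsigned long n,
                                    unsigned int size_bits)
    {
            return &descs[n & ((1UL << size_bits) - 1)];
    }
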
417 /* Query the state of a descriptor. */
428 * Get a copy of a specified descriptor and return its queried state. If the
429 * descriptor is in an inconsistent state (miss or reserved), the caller can
430 * only expect the descriptor's @state_var field to be valid.
433 * non-state_var data, they are only valid if the descriptor is in a
446 /* Check the descriptor state. */ in desc_read()
451 * The descriptor is in an inconsistent state. Set at least in desc_read()
459 * Guarantee the state is loaded before copying the descriptor in desc_read()
460 * content. This avoids copying obsolete descriptor content that might in desc_read()
461 * not apply to the descriptor state. This pairs with _prb_commit:B. in desc_read()
477 * Copy the descriptor data. The data is not valid until the in desc_read()
491 * 1. Guarantee the descriptor content is loaded before re-checking in desc_read()
492 * the state. This avoids reading an obsolete descriptor state in desc_read()
508 * state. This avoids reading an obsolete descriptor state that may in desc_read()
531 * The data has been copied. Return the current descriptor state, in desc_read()
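
A sketch of this read-copy-recheck pattern, with C11 atomics standing in for the kernel's explicit barriers. The real desc_read() uses smp_rmb() between the copy and the re-check, and strictly portable C11 would also need the payload reads to be atomic, so treat this as illustrating the ordering only.

    #include <stdatomic.h>
    #include <string.h>

    struct desc {
            _Atomic unsigned long state_var;
            char payload[32];
    };

    /* Returns 1 and fills @out only if @d stayed in the consistent
     * state @expect_val across the whole copy. */
    int desc_read_copy(struct desc *d, unsigned long expect_val, char *out)
    {
            if (atomic_load_explicit(&d->state_var,
                                     memory_order_acquire) != expect_val)
                    return 0;       /* only the state itself is valid */

            memcpy(out, d->payload, sizeof(d->payload));

            /* The kernel issues smp_rmb() here so the copy completes
             * before the state is re-checked. */
            if (atomic_load_explicit(&d->state_var,
                                     memory_order_acquire) != expect_val)
                    return 0;       /* slot recycled mid-copy: discard */

            return 1;
    }
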
543 * Take a specified descriptor out of the finalized state by attempting
560 * Given the text data ring, put the associated descriptor of each
563 * If there is any problem making the associated descriptor reusable, either
564 * the descriptor has not yet been finalized or another writer context has
589 * area. If the loaded value matches a valid descriptor ID, in data_make_reusable()
590 * the blk_lpos of that descriptor will be checked to make in data_make_reusable()
606 * This data block is invalid if the descriptor in data_make_reusable()
615 * This data block is invalid if the descriptor in data_make_reusable()
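
A sketch of the one state transition this path may perform, reusing make_state_val() and struct desc from the sketches above: the compare-exchange (the kernel uses cmpxchg) fails if the descriptor was never finalized, is already reusable, or has been recycled under a different ID, which are exactly the failure cases named here.

    /* Move a finalized descriptor to reusable. */
    int try_make_reusable(struct desc *d, unsigned long id)
    {
            unsigned long expect = make_state_val(id, desc_finalized);

            return atomic_compare_exchange_strong(&d->state_var, &expect,
                            make_state_val(id, desc_reusable));
    }
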
648 * Any descriptor states that have transitioned to reusable due to the in data_push_tail()
671 * sees the new tail lpos, any descriptor states that transitioned to in data_push_tail()
708 * 2. Guarantee the descriptor state loaded in in data_push_tail()
712 * recycled descriptor causing the tail lpos to in data_push_tail()
748 * Guarantee any descriptor states that have transitioned to in data_push_tail()
751 * the descriptor states reusable. This pairs with in data_push_tail()
765 * descriptor, thus invalidating the oldest descriptor. Before advancing
766 * the tail, the tail descriptor is made reusable and all data blocks up to
767 * and including the descriptor's data block are invalidated (i.e. the data
768 * ring tail is pushed past the data block of the descriptor being made
794 * tail and recycled the descriptor already. Success is in desc_push_tail()
811 * descriptor can be made available for recycling. Invalidating in desc_push_tail()
813 * data blocks once their associated descriptor is gone. in desc_push_tail()
820 * Check the next descriptor after @tail_id before pushing the tail in desc_push_tail()
824 * A successful read implies that the next descriptor is less than or in desc_push_tail()
833 * Guarantee any descriptor states that have transitioned to in desc_push_tail()
835 * verifying the recycled descriptor state. A full memory in desc_push_tail()
837 * descriptor states reusable. This pairs with desc_reserve:D. in desc_push_tail()
845 * case that the descriptor has been recycled. This pairs in desc_push_tail()
867 * Re-check the tail ID. The descriptor following @tail_id is in desc_push_tail()
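
A sketch of the tail push itself, assuming the caller has already made the tail descriptor reusable and pushed the data ring tail past its data blocks. As the comments above note, a failed compare-exchange still counts as success if some other context advanced the tail first.

    /* Try to advance @tail_id past @tail. */
    int try_push_desc_tail(_Atomic unsigned long *tail_id, unsigned long tail)
    {
            unsigned long expect = tail;

            atomic_compare_exchange_strong(tail_id, &expect, tail + 1);

            /* Success if we advanced it or another context already had. */
            return atomic_load(tail_id) != tail;
    }
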
878 /* Reserve a new descriptor, invalidating the oldest if necessary. */
921 * Make space for the new descriptor by in desc_reserve()
930 * recycled descriptor state. A read memory barrier is in desc_reserve()
953 * recycling the descriptor. Data ring tail changes can in desc_reserve()
960 * descriptor. A full memory barrier is needed since in desc_reserve()
966 * finalize the previous descriptor. This pairs with in desc_reserve()
975 * If the descriptor has been recycled, verify the old state val. in desc_reserve()
986 * Assign the descriptor a new ID and set its state to reserved. in desc_reserve()
989 * Guarantee the new descriptor ID and state is stored before making in desc_reserve()
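
A sketch of that final step, reusing earlier helpers: "reusable under the old ID" atomically becomes "reserved under the new ID". Because the ID is part of the compared value, once this succeeds any stale compare-exchange built on the old ID must fail, which is the ABA protection the surrounding comments refer to.

    int reserve_recycled(struct desc *d, unsigned long old_id,
                         unsigned long new_id)
    {
            unsigned long expect = make_state_val(old_id, desc_reusable);

            return atomic_compare_exchange_strong(&d->state_var, &expect,
                            make_state_val(new_id, desc_reserved));
    }
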
1026 * a specified descriptor.
1062 * 1. Guarantee any descriptor states that have transitioned in data_alloc()
1065 * since other CPUs may have made the descriptor states in data_alloc()
1102 * Try to resize an existing data block associated with the descriptor
1271 * Attempt to transition the newest descriptor from committed back to reserved
1273 * if the descriptor is not yet finalized and the provided @caller_id matches.
1288 * To reduce unnecessary reopening, first check if the descriptor in desc_reopen_last()
1377 /* Transition the newest descriptor back to the reserved state. */ in prb_reserve_in_last()
1397 * exclusive access at that point. The descriptor may have in prb_reserve_in_last()
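
A sketch of the reopen transition, reusing earlier helpers: committed moves back to reserved under the same ID, giving the writer exclusive access again; it fails once the descriptor has been finalized or recycled. (The real desc_reopen_last() additionally requires the provided @caller_id to match.)

    int try_reopen(struct desc *d, unsigned long id)
    {
            unsigned long expect = make_state_val(id, desc_committed);

            /* Fails if the descriptor was finalized or recycled. */
            return atomic_compare_exchange_strong(&d->state_var, &expect,
                            make_state_val(id, desc_reserved));
    }
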
1559 * Attempt to finalize a specified descriptor. If this fails, the descriptor
1620 /* Descriptor reservation failures are tracked. */ in prb_reserve()
1663 * previous descriptor now so that it can be made available to in prb_reserve()
1664 * readers. (For seq==0 there is no previous descriptor.) in prb_reserve()
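
For context, writer usage is shaped like the sample in printk_ringbuffer.h; the ringbuffer @rb is assumed to be defined elsewhere (e.g. via DEFINE_PRINTKRB()), the text and sizes here are illustrative, and the API may differ across kernel versions.

    struct prb_reserved_entry e;
    struct printk_record r;

    /* specify how much text space to reserve */
    prb_rec_init_wr(&r, 32);

    if (prb_reserve(&e, &rb, &r)) {
            snprintf(r.text_buf, r.text_buf_size, "hello");
            r.info->text_len = 5;
            prb_final_commit(&e);   /* commit and finalize the record */
    }
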
1699 * Set the descriptor as committed. See "ABA Issues" about why in _prb_commit()
1702 * 1. Guarantee all record data is stored before the descriptor state in _prb_commit()
1706 * 2. Guarantee the descriptor state is stored as committed before in _prb_commit()
1708 * descriptor. This pairs with desc_reserve:D. in _prb_commit()
1754 * If this descriptor is no longer the head (i.e. a new record has in prb_commit()
1857 * descriptor. However, it also verifies that the record is finalized and has
1878 * does not exist. A descriptor in the reserved or committed state in desc_read_finalized_seq()
1889 * A descriptor in the reusable state may no longer have its data in desc_read_finalized_seq()
1918 /* Extract the ID, used to specify the descriptor to read. */ in prb_read()
1921 /* Get a local copy of the correct descriptor (if available). */ in prb_read()
1945 /* Get the sequence number of the tail descriptor. */
1969 * that the descriptor has been recycled. This pairs with in prb_first_seq()
2052 /* Extract the ID, used to specify the descriptor to read. */ in prb_next_reserve_seq()
2073 * the descriptor ID of the first one (@seq=0) for in prb_next_reserve_seq()
2083 /* Diff of known descriptor IDs to compute related sequence numbers. */ in prb_next_reserve_seq()
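
A sketch of that derivation: each record increments both the descriptor ID and the sequence number by exactly one, so any descriptor's sequence number follows from one known (id, seq) pair, and unsigned wraparound keeps the difference correct. (The kernel handles the 32-bit case, where IDs are narrower than sequence numbers, separately.)

    static unsigned long seq_of(unsigned long known_id,
                                unsigned long known_seq, unsigned long id)
    {
            return known_seq + (id - known_id);
    }
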
2293 * @descs: The descriptor buffer for ringbuffer records.