Lines Matching defs:lpos
277 * push data tail (lpos), then set new descriptor reserved (state)
292 * push data tail (lpos), then modify new data block area
304 * set descriptor reusable (state), then push data tail (lpos)
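These pairings describe ordering rules in the printk ringbuffer between updates of the data-ring tail (an lpos) and descriptor state changes. As a rough illustration of the first rule only, the userspace sketch below models "push data tail, then set new descriptor reserved" with C11 release/acquire atomics; the names and the barrier choice are assumptions made for the model, not the kernel's actual barrier scheme.

#include <stdatomic.h>

enum desc_state { DESC_MISS, DESC_RESERVED };

atomic_ulong tail_lpos;
atomic_int state;

void writer_side(unsigned long new_tail)
{
	/* Push the data tail first ... */
	atomic_store_explicit(&tail_lpos, new_tail, memory_order_relaxed);
	/* ... then publish the reserved descriptor state with release ordering. */
	atomic_store_explicit(&state, DESC_RESERVED, memory_order_release);
}

unsigned long observer_side(void)
{
	/* An acquire load that observes the reserved state ... */
	if (atomic_load_explicit(&state, memory_order_acquire) == DESC_RESERVED) {
		/* ... is also guaranteed to observe the pushed tail. */
		return atomic_load_explicit(&tail_lpos, memory_order_relaxed);
	}
	return 0;
}
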
320 #define DATA_INDEX(data_ring, lpos) ((lpos) & DATA_SIZE_MASK(data_ring))
326 #define DATA_WRAPS(data_ring, lpos) ((lpos) >> (data_ring)->size_bits)
329 #define LPOS_DATALESS(lpos) ((lpos) & 1UL)
334 #define DATA_THIS_WRAP_START_LPOS(data_ring, lpos) \
335 ((lpos) & ~DATA_SIZE_MASK(data_ring))
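Taken together, these macros split an lpos into an index within the data area, a wrap count, and the lpos at which the current wrap begins, with bit 0 marking data-less blocks. A minimal userspace demo of the decomposition, assuming a 4 KiB data area (size_bits = 12) and the DATA_SIZE/DATA_SIZE_MASK helpers implied by the macros above:

#include <stdio.h>

struct prb_data_ring {
	unsigned int size_bits;
};

#define DATA_SIZE(data_ring)		(1UL << (data_ring)->size_bits)
#define DATA_SIZE_MASK(data_ring)	(DATA_SIZE(data_ring) - 1)

#define DATA_INDEX(data_ring, lpos)	((lpos) & DATA_SIZE_MASK(data_ring))
#define DATA_WRAPS(data_ring, lpos)	((lpos) >> (data_ring)->size_bits)
#define DATA_THIS_WRAP_START_LPOS(data_ring, lpos) \
	((lpos) & ~DATA_SIZE_MASK(data_ring))

int main(void)
{
	struct prb_data_ring ring = { .size_bits = 12 };	/* assumed 4 KiB area */
	unsigned long lpos = (3UL << 12) + 100;	/* 3 full wraps plus 100 bytes */

	printf("index=%lu wraps=%lu wrap_start=%lu\n",
	       DATA_INDEX(&ring, lpos),
	       DATA_WRAPS(&ring, lpos),
	       DATA_THIS_WRAP_START_LPOS(&ring, lpos));
	/* prints: index=100 wraps=3 wrap_start=12288 */
	return 0;
}
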
562 * already pushed the tail lpos past the problematic data block. Regardless,
563 * on error the caller can re-load the tail lpos to determine the situation.
629 * Advance the data ring tail to at least @lpos. This function puts
633 static bool data_push_tail(struct printk_ringbuffer *rb, unsigned long lpos)
640 /* If @lpos is from a data-less block, there is nothing to do. */
641 if (LPOS_DATALESS(lpos))
665 * Loop until the tail lpos is at or beyond @lpos. This condition
668 * sees the new tail lpos, any descriptor states that transitioned to
671 while ((lpos - tail_lpos) - 1 < DATA_SIZE(data_ring)) {
674 * data blocks before @lpos.
676 if (!data_make_reusable(rb, tail_lpos, lpos, &next_lpos)) {
680 * reloading the tail lpos. The failed
682 * recycled data area causing the tail lpos to
707 * reloading the tail lpos. The failed
709 * recycled descriptor causing the tail lpos to
746 * reusable are stored before pushing the tail lpos. A full
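The loop condition shown at source line 671 relies on unsigned wrap-around: "(lpos - tail_lpos) - 1 < DATA_SIZE(data_ring)" holds only while tail_lpos is strictly behind lpos by no more than one data-area size, and becomes false as soon as the tail catches up to or passes lpos. A small standalone check of that behavior (the 4 KiB DATA_SIZE is an assumed example value):

#include <stdio.h>

#define DATA_SIZE	(1UL << 12)	/* assumed 4 KiB data area */

static int tail_behind(unsigned long tail_lpos, unsigned long lpos)
{
	return (lpos - tail_lpos) - 1 < DATA_SIZE;
}

int main(void)
{
	printf("%d\n", tail_behind(100, 300));		/* 1: tail still behind lpos */
	printf("%d\n", tail_behind(300, 300));		/* 0: tail caught up, loop exits */
	printf("%d\n", tail_behind(500, 300));		/* 0: tail already past lpos */
	printf("%d\n", tail_behind(0, DATA_SIZE + 1));	/* 0: more than a full data area behind */
	return 0;
}
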
1004 unsigned long lpos, unsigned int size)
1009 begin_lpos = lpos;
1010 next_lpos = lpos + size;
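The begin_lpos/next_lpos fragment above computes the span of a prospective data block. Consistent with the macros earlier in this listing, a block whose end would land in a different wrap than its start cannot straddle the boundary and is instead placed at the beginning of the next wrap. The userspace sketch below is a model of that computation under those assumptions, not the kernel function itself; size_bits = 12 is again an assumed value.

#include <stdio.h>

struct prb_data_ring {
	unsigned int size_bits;
};

#define DATA_SIZE_MASK(r)	((1UL << (r)->size_bits) - 1)
#define DATA_WRAPS(r, lpos)	((lpos) >> (r)->size_bits)
#define DATA_THIS_WRAP_START_LPOS(r, lpos)	((lpos) & ~DATA_SIZE_MASK(r))

static unsigned long next_block_lpos(struct prb_data_ring *r,
				     unsigned long lpos, unsigned int size)
{
	unsigned long begin_lpos = lpos;
	unsigned long next_lpos = lpos + size;

	/* The block fits entirely within the current wrap: use it as-is. */
	if (DATA_WRAPS(r, begin_lpos) == DATA_WRAPS(r, next_lpos))
		return next_lpos;

	/* Otherwise place the block at the beginning of the next wrap. */
	return DATA_THIS_WRAP_START_LPOS(r, next_lpos) + size;
}

int main(void)
{
	struct prb_data_ring ring = { .size_bits = 12 };

	printf("%lu\n", next_block_lpos(&ring, 100, 200));	/* 300: no wrap needed */
	printf("%lu\n", next_block_lpos(&ring, 4000, 200));	/* 4296: moved to next wrap */
	return 0;
}
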
1036 * reader will recognize these special lpos values and handle
1075 * 2. Guarantee any updated tail lpos is stored before
1080 * load a new tail lpos. A full memory barrier is needed
1081 * since other CPUs may have updated the tail lpos. This
1225 * This function (used by readers) performs strict validation on the lpos
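
As a rough illustration of what strict reader-side validation of lpos values can look like, the sketch below applies two sanity checks to a block's begin/next lpos pair; the specific checks are assumptions chosen for illustration, not the kernel's exact validation, and the 4 KiB DATA_SIZE is likewise assumed.

#include <stdbool.h>

#define DATA_SIZE		(1UL << 12)	/* assumed 4 KiB data area */
#define LPOS_DATALESS(lpos)	((lpos) & 1UL)

bool blk_lpos_sane(unsigned long begin_lpos, unsigned long next_lpos)
{
	/* Data-less blocks use special lpos values and carry no data. */
	if (LPOS_DATALESS(begin_lpos) && LPOS_DATALESS(next_lpos))
		return false;

	/* A real block must extend forward by at most one data-area size. */
	return (next_lpos - begin_lpos) - 1 < DATA_SIZE;
}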