Lines Matching full:thus

5: * device (ETE) thus generating required trace data. Trace can be enabled
32: * sinks and thus we use ETE trace packets to pad the   (see padding sketch below)
81: * a trace session. Thus we need a quicker access to per-CPU
269 (trbe_report_wrap_event): * ETE and thus there might be some amount of trace that was
379 (trbe_min_trace_buf_size): * within the buffer. Thus we ensure there is at least an extra
422 (__trbe_normal_offset): * and skip this section thus advancing the head.
666 (trbe_get_fault_act): * Thus the check TRBPTR == TRBBASER will not be honored.
694 (trbe_get_trace_size): * erratum which forces the PAGE_SIZE alignment on the TRBPTR, and thus
696 (trbe_get_trace_size): * 64bytes. Thus we ignore the potential triggering of the erratum
706 (trbe_get_trace_size): * of the ring buffer. Thus use the beginning of the ring   (see trace-size sketch below)
814 (arm_trbe_update_buffer): * driver skips it. Thus, just pass in 0 size here to indicate that the
874 (trbe_apply_work_around_before_enable): * Thus, we could lose some amount of the trace at the base.
910 (trbe_apply_work_around_before_enable): * Thus it is easier to ignore those bytes than to complicate the
915 (trbe_apply_work_around_before_enable): * Thus the full workaround will move the BASE and the PTR and would
1079 (trbe_handle_overflow): * thus leave the TRBE disabled. The etm-perf driver
1335 (arm_trbe_probe_cpu): * Thus make sure we always align our write pointer to a PAGE_SIZE,   (see alignment sketch below)
1337 (arm_trbe_probe_cpu): * the buffer (TRBLIMITR is PAGE aligned) and thus we can skip
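
Padding sketch. The matches at lines 32 and 422 above refer to padding a region of the buffer with ETE trace packets so the decoder skips it and the head can be advanced past it. The code below is a minimal user-space sketch of that idea only; the helper name pad_buf and the ETE_IGNORE_PACKET value are assumptions for illustration, not taken from the driver.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ETE_IGNORE_PACKET	0x70	/* assumed value of a single-byte ignore packet */

/* Fill [offset, offset + len) of the buffer with ignore packets. */
static void pad_buf(uint8_t *buf, size_t offset, size_t len)
{
	memset(buf + offset, ETE_IGNORE_PACKET, len);
}

int main(void)
{
	uint8_t buf[16] = { 0 };
	size_t i;

	/* Pretend bytes 4..11 hold a region the decoder must skip. */
	pad_buf(buf, 4, 8);

	for (i = 0; i < sizeof(buf); i++)
		printf("%02x ", buf[i]);
	printf("\n");
	return 0;
}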
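Trace-size sketch. The matches at lines 694-706 above describe how the amount of collected trace is derived when an erratum workaround forces PAGE_SIZE alignment on the TRBPTR: the beginning of the ring buffer is then used as the reference point. The sketch below mirrors that idea under those assumptions only; trace_size() and its parameters are illustrative, not the driver's API.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Amount of trace collected, measured as the distance from a reference
 * point to the current write pointer.  With the PAGE_SIZE-alignment
 * erratum workaround in effect, measure from the beginning of the ring
 * buffer rather than from the last known offset.
 */
static uint64_t trace_size(uint64_t ring_base, uint64_t last_offset,
			   uint64_t write_ptr, bool erratum_workaround)
{
	uint64_t start = erratum_workaround ? ring_base : last_offset;

	return write_ptr - start;
}

int main(void)
{
	printf("normal:     %llu bytes\n",
	       (unsigned long long)trace_size(0x1000, 0x3000, 0x5000, false));
	printf("workaround: %llu bytes\n",
	       (unsigned long long)trace_size(0x1000, 0x3000, 0x5000, true));
	return 0;
}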
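Alignment sketch. Lines 1335 and 1337 above note that the write pointer is always aligned to a PAGE_SIZE because the buffer limit programmed via TRBLIMITR is page aligned. A minimal sketch of that rounding, assuming a 4K page and a hypothetical helper name:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE	4096ULL	/* assumed page size for illustration */

/* Round a buffer offset down to a page boundary. */
static uint64_t align_write_ptr(uint64_t offset)
{
	return offset & ~(PAGE_SIZE - 1);
}

int main(void)
{
	const uint64_t offsets[] = { 0, 100, 4096, 8191, 12288 };
	unsigned int i;

	for (i = 0; i < sizeof(offsets) / sizeof(offsets[0]); i++)
		printf("offset %5llu -> aligned %5llu\n",
		       (unsigned long long)offsets[i],
		       (unsigned long long)align_write_ptr(offsets[i]));
	return 0;
}

Rounding down (rather than up) matches the listing's point that it is simpler to skip or ignore a few bytes than to complicate the pointer handling.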