Lines Matching full:thus
5 * device (ETE) thus generating required trace data. Trace can be enabled
33 * sinks and thus we use ETE trace packets to pad the
82 * a trace session. Thus we need quicker access to per-CPU
272 * ETE and thus there might be some amount of trace that was in trbe_report_wrap_event()
382 * within the buffer. Thus we ensure there is at least an extra in trbe_min_trace_buf_size()
425 * and skip this section thus advancing the head. in __trbe_normal_offset()
669 * Thus the check TRBPTR == TRBBASER will not be honored. in trbe_get_fault_act()
697 * erratum which forces the PAGE_SIZE alignment on the TRBPTR, and thus in trbe_get_trace_size()
699 * 64 bytes. Thus we ignore the potential triggering of the erratum in trbe_get_trace_size()
709 * of the ring buffer. Thus use the beginning of the ring in trbe_get_trace_size()
817 * driver skips it. Thus, just pass in 0 size here to indicate that the in arm_trbe_update_buffer()
877 * Thus, we could lose some amount of the trace at the base. in trbe_apply_work_around_before_enable()
913 * Thus it is easier to ignore those bytes than to complicate the in trbe_apply_work_around_before_enable()
918 * Thus the full workaround will move the BASE and the PTR and would in trbe_apply_work_around_before_enable()
1082 * thus leave the TRBE disabled. The etm-perf driver in trbe_handle_overflow()
1348 * Thus make sure we always align our write pointer to PAGE_SIZE, in arm_trbe_probe_cpu()
1350 * the buffer (TRBLIMITR is PAGE aligned) and thus we can skip in arm_trbe_probe_cpu()
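
The matches above from trbe_get_trace_size() (source lines 697-709) concern how the captured trace is sized once the erratum workaround keeps TRBPTR PAGE_SIZE aligned: on a wrap the size is measured from the beginning of the ring buffer rather than from the previous write pointer, and a residual misalignment of at most 64 bytes is simply ignored. The following is a minimal, self-contained C sketch of that idea only; it is not the driver's code, and the names (sketch_trace_size(), prev_write, wrapped) are hypothetical.

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative sketch of sizing captured trace in a ring buffer.
 * When the hardware wrapped, everything from the buffer base up to the
 * current write pointer counts as trace; otherwise only the delta from
 * the previous write pointer does. Any small residual misalignment is
 * deliberately ignored, mirroring the comments quoted above.
 */
static uint64_t sketch_trace_size(uint64_t base, uint64_t prev_write,
                                  uint64_t cur_write, int wrapped)
{
        uint64_t start = wrapped ? base : prev_write;

        return cur_write - start;
}

int main(void)
{
        /* Wrapped case: measure from the base of the ring buffer. */
        printf("wrapped size  : %llu\n",
               (unsigned long long)sketch_trace_size(0x1000, 0x3000, 0x2000, 1));
        /* Normal case: measure from the previous write pointer. */
        printf("unwrapped size: %llu\n",
               (unsigned long long)sketch_trace_size(0x1000, 0x1800, 0x2000, 0));
        return 0;
}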
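
Similarly, the arm_trbe_probe_cpu() matches (source lines 1348-1350) note that keeping the write pointer PAGE_SIZE aligned, together with a page-aligned TRBLIMITR, guarantees the trace unit never writes outside the mapped buffer, at the cost of skipping some bytes at the base. Below is a small, hedged C sketch of that alignment arithmetic, again with hypothetical names (compute_aligned_window()); it demonstrates the page-alignment idea only and is not the kernel implementation.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1UL))

/*
 * Hypothetical helper: given a ring buffer [base, base + size) and the
 * current head offset, derive a PAGE_SIZE-aligned write pointer and a
 * page-aligned limit, so that a write which runs up to the limit can
 * never leave the mapped buffer. Up to PAGE_SIZE - 1 bytes at the head
 * are sacrificed for that guarantee.
 */
static void compute_aligned_window(uint64_t base, uint64_t size,
                                   uint64_t head, uint64_t *wrptr,
                                   uint64_t *limit)
{
        assert((base % PAGE_SIZE) == 0 && (size % PAGE_SIZE) == 0);

        /* Align the write pointer down to a page boundary. */
        *wrptr = ALIGN_DOWN(base + head, PAGE_SIZE);

        /* The limit is the page-aligned end of the buffer. */
        *limit = base + size;
}

int main(void)
{
        uint64_t wrptr, limit;

        compute_aligned_window(0x100000, 16 * PAGE_SIZE, 1234, &wrptr, &limit);
        printf("wrptr=%#llx limit=%#llx\n",
               (unsigned long long)wrptr, (unsigned long long)limit);
        return 0;
}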