Lines Matching +full:ips +full:- +full:supply

3 ------------------------------
7 as instructions executed, cache-misses suffered, or branches mis-predicted -
9 trigger interrupts when a threshold number of events have passed - and can
15 provides "virtual" 64-bit counters, regardless of the width of the
72 is divided into 3 bit-fields:
80 machine-specific.
119 will return -EINVAL.
121 More hw_event_types are supported as well, but they are CPU-specific
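For illustration only, one of the generalized hardware events mentioned above could be requested through the perf_event_attr structure that current kernels ship in <linux/perf_event.h>; the field names differ from the older hw_event layout referred to here, so treat this as a sketch rather than the canonical interface:

#include <linux/perf_event.h>
#include <string.h>

/* Sketch: count hardware cache-misses; start the counter disabled so it
 * can be enabled explicitly later. */
static void init_cache_miss_attr(struct perf_event_attr *attr)
{
	memset(attr, 0, sizeof(*attr));
	attr->size     = sizeof(*attr);
	attr->type     = PERF_TYPE_HARDWARE;         /* generalized hw event */
	attr->config   = PERF_COUNT_HW_CACHE_MISSES; /* one of the generic types */
	attr->disabled = 1;
}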
173 particular counter, allowing one to take the round-robin scheduling effect
195 Such (and other) events will be recorded in a ring-buffer, which is
196 available to user-space using mmap() (see below).
213 'error' state, where reads return end-of-file (i.e. read() returns 0)
218 In the future, this will allow sophisticated monitoring programs to supply
235 these events are recorded in the ring-buffer (see below).
238 This too is recorded in the ring-buffer (see below).
254 cpu == -1: the counter counts on all CPUs
256 (Note: the combination of 'pid == -1' and 'cpu == -1' is not valid.)
258 A 'pid > 0' and 'cpu == -1' counter is a per task counter that counts
263 A 'pid == -1' and 'cpu == x' counter is a per CPU counter that counts
264 all events on CPU-x. Per CPU counters need CAP_PERFMON or CAP_SYS_ADMIN
271 is created first, with group_fd = -1 in the sys_perf_event_open call
274 (A single counter on its own is created with group_fd = -1 and is
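As a sketch of the pid/cpu/group_fd combinations above: glibc provides no wrapper for this system call, so syscall(2) is used directly; open_counter() is only an illustrative helper name.

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Sketch: a per-task counter that follows the task to whatever CPU it
 * runs on (pid > 0, cpu == -1), created standalone (group_fd == -1). */
static int open_counter(struct perf_event_attr *attr, pid_t pid)
{
	return syscall(__NR_perf_event_open, attr, pid,
		       -1 /* cpu */, -1 /* group_fd */, 0 /* flags */);
}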
286 tracking are logged into a ring-buffer. This ring-buffer is created and
289 The mmap size should be 1+2^n pages, where the first page is a meta-data page
291 as where the ring-buffer head is.
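A minimal sketch of mapping those 1+2^n pages, given the fd returned by sys_perf_event_open and an arbitrary choice of n; map_ring_buffer() is an illustrative helper, not part of the interface.

#include <sys/mman.h>
#include <unistd.h>

/* Sketch: map one metadata page plus 2^n data pages; returns MAP_FAILED
 * on error, like mmap() itself. */
static void *map_ring_buffer(int fd, unsigned int n)
{
	size_t page = sysconf(_SC_PAGESIZE);
	size_t len  = (1 + (1UL << n)) * page;	/* 1 + 2^n pages */

	return mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}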
301 * Bits needed to read the hw counters in user-space.
307 * seq = pc->lock;
310 * if (pc->index) {
311 * count = pmc_read(pc->index - 1);
312 * count += pc->offset;
317 * } while (pc->lock != seq);
319 * NOTE: for obvious reasons this only works on self-monitoring
329 * User-space should issue an rmb(), on SMP-capable platforms, after
330 * reading this value -- see perf_event_wakeup().
335 NOTE: the hw-counter userspace bits are arch-specific and are currently only
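Filling in the read sequence sketched in the comments above for the x86 self-monitoring case: rdpmc stands in for pmc_read(), pc points at the mmap()ed metadata page, and read_self_counter() is an illustrative helper. Current kernels also advertise in this page whether user-space rdpmc is permitted, which a real reader should check first.

#include <linux/perf_event.h>
#include <stdint.h>

#define barrier() __asm__ __volatile__("" ::: "memory")

/* pmc_read() via the x86 rdpmc instruction. */
static uint64_t rdpmc(uint32_t counter)
{
	uint32_t lo, hi;

	__asm__ __volatile__("rdpmc" : "=a"(lo), "=d"(hi) : "c"(counter));
	return (uint64_t)lo | ((uint64_t)hi << 32);
}

/* Returns 0 and stores the count on success; returns -1 when the counter
 * is not resident on a hardware PMC and a regular read() of the counter
 * fd is needed instead. */
static int read_self_counter(volatile struct perf_event_mmap_page *pc,
			     int64_t *countp)
{
	uint32_t seq, index;
	int64_t count;

	do {
		seq = pc->lock;
		barrier();

		index = pc->index;
		if (!index)
			return -1;		/* fall back to read() */
		count = rdpmc(index - 1);	/* hw counter value */
		count += pc->offset;		/* kernel-maintained base */

		barrier();
	} while (pc->lock != seq);		/* retry if it changed under us */

	*countp = count;
	return 0;
}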
338 The following 2^n pages are the ring-buffer which contains events of the form:
354 * correlate userspace IPs to code. They have the following structure:
398 * u64 ips[nr]; } && PERF_RECORD_CALLCHAIN
413 Future work will include a splice() interface to the ring-buffer.
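A rough sketch of walking those records using the data_head/data_tail fields of the current perf_event_mmap_page layout; drain_ring_buffer() and the handle() callback are illustrative only, and for brevity the sketch ignores records that wrap around the end of the data area, which a real reader must handle.

#include <linux/perf_event.h>
#include <stddef.h>
#include <stdint.h>

/* base: start of the mmap()ed area (the metadata page); page: page size;
 * n: log2 of the number of data pages; handle: caller-supplied callback. */
static void drain_ring_buffer(void *base, size_t page, unsigned int n,
			      void (*handle)(const struct perf_event_header *))
{
	volatile struct perf_event_mmap_page *pc = base;
	uint8_t *data = (uint8_t *)base + page;		/* data follows the metadata page */
	uint64_t mask = ((uint64_t)page << n) - 1;	/* data area is 2^n pages */
	uint64_t head = pc->data_head;
	uint64_t tail = pc->data_tail;

	__asm__ __volatile__("" ::: "memory");		/* rmb() after reading data_head */

	while (tail < head) {
		const struct perf_event_header *hdr =
			(const void *)(data + (tail & mask));

		handle(hdr);			/* hdr->type says which record this is */
		tail += hdr->size;		/* every record carries its total size */
	}

	pc->data_tail = tail;			/* tell the kernel what was consumed */
}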
432 group other than the leader only affects that counter - disabling a
433 non-leader stops that counter from counting but doesn't affect any
436 Additionally, non-inherited overflow counters can use
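For illustration, a two-counter group built with the group_fd mechanism described earlier, combined with the enable ioctl on the leader; open_group() is an illustrative helper and assumes the leader's attr was set up with .disabled = 1 so that enabling the leader starts the whole group at once.

#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Sketch: the leader is opened with group_fd == -1; the member passes the
 * leader's fd to join the group.  Enabling/disabling the leader then acts
 * on the whole group, while the same ioctl on the member affects only
 * that one counter. */
static int open_group(struct perf_event_attr *leader_attr,
		      struct perf_event_attr *member_attr,
		      pid_t pid, int *member_fd)
{
	int leader_fd = syscall(__NR_perf_event_open, leader_attr,
				pid, -1 /* cpu */, -1 /* no group yet */, 0);

	if (leader_fd < 0)
		return -1;

	*member_fd = syscall(__NR_perf_event_open, member_attr,
			     pid, -1 /* cpu */, leader_fd /* join group */, 0);

	ioctl(leader_fd, PERF_EVENT_IOC_ENABLE, 0);	/* start the group */
	return leader_fd;
}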
456 -----------------
463 - asm/perf_event.h - a basic stub will suffice at first
464 - support for atomic64 types (and associated helper functions)
469 Architectures that have d-cache aliasing issues, such as Sparc and ARM,