Lines Matching +full:auto +full:- +full:detects
1 /* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
6 * Copyright (C) 2013-2015 Alexei Starovoitov <ast@kernel.org>
100 * @brief **libbpf_set_print()** sets user-provided log callback function to
108 * This function is thread-safe.
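
The two lines above come from the libbpf_set_print() documentation. A minimal sketch of installing a custom logger; the filtering policy shown is illustrative:

    #include <stdio.h>
    #include <stdarg.h>
    #include <bpf/libbpf.h>

    /* Forward everything except debug-level chatter to stderr. */
    static int my_libbpf_print(enum libbpf_print_level level,
                               const char *format, va_list args)
    {
            if (level == LIBBPF_DEBUG)
                    return 0;
            return vfprintf(stderr, format, args);
    }

    int main(void)
    {
            /* thread-safe; returns the previously installed callback */
            libbpf_print_fn_t prev = libbpf_set_print(my_libbpf_print);

            (void)prev;
            return 0;
    }
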
119 * - for object open from file, this will override setting object
121 * - for object open from memory buffer, this will specify an object
122 * name and will override default "<addr>-<buf-size>" name;
125 /* parse map definitions non-strictly, allowing extra attributes/data */
129 * auto-pinned to that path on load; defaults to "/sys/fs/bpf".
139 /* Path to the custom BTF to be used for BPF CO-RE relocations.
141 * for the purpose of CO-RE relocations.
148 * passed-through to bpf() syscall. Keep in mind that kernel might
149 * fail operation with -ENOSPC error if provided buffer is too small
155 * - each BPF program load (BPF_PROG_LOAD) attempt, unless overridden
156 * with bpf_program__set_log() on per-program level, to get
158 * - during BPF object's BTF load into kernel (BPF_BTF_LOAD) to get
162 * previous contents, so if you need more fine-grained control, set
163 * per-program buffer with bpf_program__set_log_buf() to preserve each
177 * could be either libbpf's own auto-allocated log buffer, if
178 * kernel_log_buf is NULL, or user-provided custom kernel_log_buf.
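
The scattered fragments above all describe fields of struct bpf_object_open_opts (object_name, relaxed_maps, pin_root_path, btf_custom_path, and the kernel_log_buf/kernel_log_size/kernel_log_level trio), plus the per-program bpf_program__set_log_buf() escape hatch. A hedged sketch wiring a few of them together; the object file name, object name, pin path, and buffer size are all arbitrary:

    #include <errno.h>
    #include <bpf/libbpf.h>

    static char log_buf[64 * 1024]; /* kernel fails with -ENOSPC if too small */

    int open_with_opts(void)
    {
            LIBBPF_OPTS(bpf_object_open_opts, opts,
                    .object_name = "my_obj",              /* overrides file/buffer-derived name */
                    .pin_root_path = "/sys/fs/bpf/myapp", /* base path for auto-pinned maps */
                    .kernel_log_buf = log_buf,
                    .kernel_log_size = sizeof(log_buf),
                    .kernel_log_level = 1,                /* request verifier log on each load */
            );
            struct bpf_object *obj = bpf_object__open_file("prog.bpf.o", &opts);

            if (!obj)
                    return -errno;
            /* To keep one program's log from being overwritten by later loads,
             * give it its own buffer via bpf_program__set_log_buf(). */
            return 0;
    }
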
301 * @return BPF token FD or -1, if it wasn't set
348 * @brief **bpf_program__insns()** gives read-only access to BPF program's
362 * instructions will be CO-RE-relocated, BPF subprograms instructions will be
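
bpf_program__insns() pairs with bpf_program__insn_cnt(); after a successful load the returned instructions reflect CO-RE relocations and appended subprograms. A small read-only sketch:

    #include <stdio.h>
    #include <bpf/libbpf.h>

    /* Dump the opcode of each instruction in a program's instruction stream. */
    static void dump_insns(const struct bpf_program *prog)
    {
            const struct bpf_insn *insns = bpf_program__insns(prog);
            size_t i, cnt = bpf_program__insn_cnt(prog);

            for (i = 0; i < cnt; i++)
                    printf("insn %zu: code=0x%02x\n", i, insns[i].code);
    }
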
465 * a BPF program based on auto-detection of program type, attach type,
473 * - kprobe/kretprobe (depends on SEC() definition)
474 * - uprobe/uretprobe (depends on SEC() definition)
475 * - tracepoint
476 * - raw tracepoint
477 * - tracing programs (typed raw TP/fentry/fexit/fmod_ret)
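
For the SEC()-annotated program types listed above, the generic attach entry point is enough; a sketch (obtaining the program from a skeleton or bpf_object is assumed):

    #include <errno.h>
    #include <bpf/libbpf.h>

    /* Attach based on the program's SEC() definition. */
    static int attach_one(struct bpf_program *prog)
    {
            struct bpf_link *link = bpf_program__attach(prog);

            if (!link)
                    return -errno; /* NULL signals an error; code is in errno */
            /* keep the link around, or bpf_link__destroy(link) to detach */
            return 0;
    }
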
485 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
501 * enum probe_attach_mode - the mode to attach kprobe/uprobe
503 * force libbpf to attach kprobe/uprobe in specific mode, -ENOTSUP will
520 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
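
These lines are from struct bpf_kprobe_opts and enum probe_attach_mode. A hedged sketch of attaching a kretprobe with a cookie; the kernel function name is illustrative:

    #include <bpf/libbpf.h>

    static struct bpf_link *attach_kretprobe(struct bpf_program *prog)
    {
            LIBBPF_OPTS(bpf_kprobe_opts, opts,
                    .retprobe = true,
                    .bpf_cookie = 0x1234, /* read back via bpf_get_attach_cookie() */
                    /* .attach_mode = PROBE_ATTACH_MODE_LINK would force one
                     * mechanism; attach fails with -ENOTSUP if unsupported */
            );

            return bpf_program__attach_kprobe_opts(prog, "do_unlinkat", &opts);
    }
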
547 /* array of user-provided values fetchable through bpf_get_attach_cookie */
600 * - syms and offsets are mutually exclusive
601 * - ref_ctr_offsets and cookies are optional
606 * -1 for all processes
623 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
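
The fragments above describe struct bpf_uprobe_multi_opts: syms and offsets are mutually exclusive ways to select attach points, while ref_ctr_offsets and cookies are optional parallel arrays. A sketch for recent libbpf versions; the libc path and symbols are illustrative:

    #include <bpf/libbpf.h>

    static struct bpf_link *attach_multi(struct bpf_program *prog)
    {
            const char *syms[] = { "malloc", "free" };
            __u64 cookies[] = { 1, 2 };
            LIBBPF_OPTS(bpf_uprobe_multi_opts, opts,
                    .syms = syms,       /* mutually exclusive with .offsets */
                    .cookies = cookies, /* optional, parallel to syms */
                    .cnt = 2,
            );

            /* pid -1 means all processes; func_pattern stays NULL when
             * explicit syms are passed */
            return bpf_program__attach_uprobe_multi(prog, -1,
                                                    "/usr/lib/libc.so.6",
                                                    NULL, &opts);
    }
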
648 * system supports compat syscalls or defines 32-bit syscalls in 64-bit
653 * compat and 32-bit interfaces is required.
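
These two lines come from bpf_program__attach_ksyscall(), which resolves the arch-specific syscall wrapper symbol itself; tracing compat/32-bit entry points still requires manual kprobes. A minimal sketch:

    #include <bpf/libbpf.h>

    static struct bpf_link *attach_close_syscall(struct bpf_program *prog)
    {
            LIBBPF_OPTS(bpf_ksyscall_opts, opts,
                    .bpf_cookie = 7, /* read back via bpf_get_attach_cookie() */
            );

            /* libbpf picks the right __<arch>_sys_close symbol */
            return bpf_program__attach_ksyscall(prog, "close", &opts);
    }
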
670 * a6ca88b241d5 ("trace_uprobe: support reference counter in fd-based uprobe")
673 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
677 /* Function name to attach to. Could be an unqualified ("abc") or library-qualified
701 * -1 for all processes
719 * -1 for all processes
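
These fragments belong to struct bpf_uprobe_opts and the bpf_program__attach_uprobe*() family. A hedged sketch of attaching by symbol name rather than raw offset; "malloc" and the libc path are illustrative:

    #include <bpf/libbpf.h>

    static struct bpf_link *attach_malloc(struct bpf_program *prog, pid_t pid)
    {
            LIBBPF_OPTS(bpf_uprobe_opts, opts,
                    .func_name = "malloc", /* unqualified or library-qualified */
                    .retprobe = false,
                    .bpf_cookie = 42,
            );

            /* pid -1 traces all processes; func_offset 0 is relative to
             * the resolved symbol */
            return bpf_program__attach_uprobe_opts(prog, pid,
                                                   "/usr/lib/libc.so.6",
                                                   0, &opts);
    }
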
734 /* custom user-provided value accessible through usdt_cookie() */
742 * bpf_program__attach_uprobe_opts() except it covers USDT (User-space
744 * user-space function entry or exit.
748 * -1 for all processes
765 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
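
USDT attachment works the same way, per the bpf_program__attach_usdt() fragments above. Provider, probe name, and binary path below are placeholders:

    #include <bpf/libbpf.h>

    static struct bpf_link *attach_usdt_probe(struct bpf_program *prog)
    {
            LIBBPF_OPTS(bpf_usdt_opts, opts,
                    .usdt_cookie = 0xcafe, /* accessible through usdt_cookie() */
            );

            return bpf_program__attach_usdt(prog, -1 /* all processes */,
                                            "/usr/sbin/myservice",
                                            "myprovider", "myprobe", &opts);
    }
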
798 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
905 * auto-detection of attachment when programs are loaded.
921 /* Per-program log level and log buffer getters/setters.
931 * @brief **bpf_program__set_attach_target()** sets BTF-based attach target
933 * - BTF-aware raw tracepoints (tp_btf);
934 * - fentry/fexit/fmod_ret;
935 * - lsm;
936 * - freplace.
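
For the four BTF-based program kinds listed above, the target must be set before the object is loaded. A sketch; the kernel function name is illustrative:

    #include <bpf/libbpf.h>

    /* Call between open and load. attach_prog_fd 0 targets the kernel;
     * pass a BPF program FD instead for freplace. */
    static int set_fentry_target(struct bpf_program *prog)
    {
            return bpf_program__set_attach_target(prog, 0, "tcp_v4_connect");
    }
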
972 * @brief **bpf_map__set_autocreate()** sets whether libbpf has to auto-create
976 * @return 0 on success; -EBUSY if BPF object was already loaded
978 * **bpf_map__set_autocreate()** allows opting out of libbpf auto-creating
983 * This API allows opting out of this process for a specific map instance. This
987 * BPF-side code that expects to use such missing BPF map is recognized by BPF
994 * @brief **bpf_map__set_autoattach()** sets whether libbpf has to auto-attach
1004 * auto-attach during BPF skeleton attach phase.
1006 * @return true if map is set to auto-attach during skeleton attach phase; false, otherwise
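
Both setters are open-phase knobs, per the fragments above. A sketch, assuming hypothetical map names in an already-opened object:

    #include <errno.h>
    #include <bpf/libbpf.h>

    /* Between open and load: skip creating an optional map, and keep a
     * struct_ops map from being attached during skeleton attach. */
    static int tune_maps(struct bpf_object *obj)
    {
            struct bpf_map *opt_map = bpf_object__find_map_by_name(obj, "optional_map");
            struct bpf_map *ops_map = bpf_object__find_map_by_name(obj, "my_ops");
            int err;

            if (!opt_map || !ops_map)
                    return -ENOENT;
            err = bpf_map__set_autocreate(opt_map, false); /* -EBUSY after load */
            if (err)
                    return err;
            return bpf_map__set_autoattach(ops_map, false);
    }
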
1014 * @return the file descriptor; or -EINVAL in case of an error
1042 * There is a special case for maps with associated memory-mapped regions, like
1045 * adjust the corresponding BTF info. This attempt is best-effort and can only
1138 * definition's **value_size**. For per-CPU BPF maps, value size has to be
1141 * per-CPU values, value size has to be aligned up to the closest 8 bytes for
1147 * **bpf_map__lookup_elem()** is the high-level equivalent of
1162 * definition's **value_size**. For per-CPU BPF maps, value size has to be
1165 * per-CPU values, value size has to be aligned up to the closest 8 bytes for
1171 * **bpf_map__update_elem()** is the high-level equivalent of
1187 * **bpf_map__delete_elem()** is the high-level equivalent of
1201 * definition's **value_size**. For per-CPU BPF maps, value size has to be
1204 * per-CPU values, value size has to be aligned up to the closest 8 bytes for
1210 * **bpf_map__lookup_and_delete_elem()** is the high-level equivalent of
1225 * @return 0, on success; -ENOENT if **cur_key** is the last key in BPF map;
1228 * **bpf_map__get_next_key()** is the high-level equivalent of
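
The lookup/update/delete/get_next_key fragments above all share the size-checking convention. A sketch of the common pattern for a plain (non-per-CPU) map; for per-CPU maps, value_sz would instead be round_up(value_size, 8) * libbpf_num_possible_cpus():

    #include <errno.h>
    #include <bpf/libbpf.h>

    static int bump_counter(struct bpf_map *map, __u32 key)
    {
            __u64 val = 0;
            int err;

            err = bpf_map__lookup_elem(map, &key, sizeof(key), &val, sizeof(val), 0);
            if (err && err != -ENOENT)
                    return err; /* negative error, per the convention above */
            val++;
            return bpf_map__update_elem(map, &key, sizeof(key),
                                        &val, sizeof(val), BPF_ANY);
    }
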
1341 * manager object. The index is 0-based and corresponds to the order in which
1371 * should still show the correct trend over the long term.
1441 * @return A pointer to an 8-byte aligned reserved region of the user ring
1464 * should block when waiting for a sample. -1 causes the caller to block
1466 * @return A pointer to an 8-byte aligned reserved region of the user ring
1472 * If **timeout_ms** is -1, the function will block indefinitely until a sample
1473 * becomes available. Otherwise, **timeout_ms** must be non-negative, or errno
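
On the user-ring-buffer side, the reserve/submit pair looks roughly like this; the sample size and timeout are arbitrary:

    #include <errno.h>
    #include <string.h>
    #include <bpf/libbpf.h>

    /* Reserve an 8-byte-aligned sample, fill it, submit it to the kernel;
     * blocks up to 500 ms if the buffer is full (-1 would block forever). */
    static int send_sample(struct user_ring_buffer *rb)
    {
            void *slot = user_ring_buffer__reserve_blocking(rb, 64, 500);

            if (!slot)
                    return -errno; /* e.g. -ETIMEDOUT */
            memset(slot, 0, 64);
            user_ring_buffer__submit(rb, slot);
            return 0;
    }
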
1549 * code to send data over to user-space
1550 * @param page_cnt number of memory pages allocated for each per-CPU buffer
1553 * @param ctx user-provided extra context passed into *sample_cb* and *lost_cb*
1564 LIBBPF_PERF_EVENT_ERROR = -1,
1565 LIBBPF_PERF_EVENT_CONT = -2,
1584 /* if cpu_cnt > 0, map_keys specify map keys to set per-CPU FDs for */
1604 * @brief **perf_buffer__buffer()** returns the per-CPU raw mmap()'ed underlying
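
Putting the perf-buffer fragments together: page_cnt pages are allocated per CPU, the callbacks receive ctx, and the LIBBPF_PERF_EVENT_ERROR/CONT codes steer raw-event consumers. A consumer sketch; the page count and poll timeout are arbitrary:

    #include <errno.h>
    #include <stdio.h>
    #include <bpf/libbpf.h>

    static void on_sample(void *ctx, int cpu, void *data, __u32 size)
    {
            printf("cpu %d: %u bytes\n", cpu, size);
    }

    static void on_lost(void *ctx, int cpu, __u64 cnt)
    {
            fprintf(stderr, "cpu %d: lost %llu samples\n", cpu,
                    (unsigned long long)cnt);
    }

    /* map_fd must refer to a BPF_MAP_TYPE_PERF_EVENT_ARRAY map */
    static int consume(int map_fd)
    {
            struct perf_buffer *pb = perf_buffer__new(map_fd, 8 /* pages per CPU */,
                                                      on_sample, on_lost,
                                                      NULL /* ctx */, NULL /* opts */);
            int err;

            if (!pb)
                    return -errno;
            while ((err = perf_buffer__poll(pb, 100 /* ms */)) >= 0)
                    ; /* samples delivered via callbacks */
            perf_buffer__free(pb);
            return err;
    }
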
1643 * @brief **libbpf_probe_bpf_prog_type()** detects if host kernel supports
1656 * @brief **libbpf_probe_bpf_map_type()** detects if host kernel supports
1669 * @brief **libbpf_probe_bpf_helper()** detects if host kernel supports the
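
The three probe helpers share one convention: 1 means supported, 0 means not supported, negative means error. A quick feature-check sketch:

    #include <stdio.h>
    #include <bpf/libbpf.h>

    static void probe_features(void)
    {
            printf("kprobe progs:  %d\n",
                   libbpf_probe_bpf_prog_type(BPF_PROG_TYPE_KPROBE, NULL));
            printf("ringbuf maps:  %d\n",
                   libbpf_probe_bpf_map_type(BPF_MAP_TYPE_RINGBUF, NULL));
            printf("ringbuf_output helper in tracepoints: %d\n",
                   libbpf_probe_bpf_helper(BPF_PROG_TYPE_TRACEPOINT,
                                           BPF_FUNC_ringbuf_output, NULL));
    }
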
1831 * auto-attach is not supported, callback should return 0 and set link to
1843 /* User-provided value that is passed to prog_setup_fn,
1873 * @return Non-negative handler ID is returned on success. This handler ID has
1879 * - if *sec* is just a plain string (e.g., "abc"), it will match only
1882 * - if *sec* is of the form "abc/", proper SEC() form is
1885 * - if *sec* is of the form "abc+", it will successfully match both
1887 * - if *sec* is NULL, custom handler is registered for any BPF program that
1895 * (i.e., it's possible to have custom SEC("perf_event/LLC-load-misses")
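
The matching rules above belong to libbpf_register_prog_handler(). A sketch registering an "abc/"-style prefix so SEC("myprobe/<something>") programs open as kprobes; the "myprobe" prefix is hypothetical, and note the thread-safety caveat on the following lines:

    #include <bpf/libbpf.h>

    static int register_myprobe_handler(void)
    {
            LIBBPF_OPTS(libbpf_prog_handler_opts, opts);

            /* returns a non-negative handler ID to pass to
             * libbpf_unregister_prog_handler() later */
            return libbpf_register_prog_handler("myprobe/",
                                                BPF_PROG_TYPE_KPROBE,
                                                0 /* no expected attach type */,
                                                &opts);
    }
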
1899 * libbpf_set_strict_mode(), etc)) these APIs are not thread-safe. User needs
1915 * libbpf_set_strict_mode(), etc)) these APIs are not thread-safe. User needs