/linux/samples/livepatch/

livepatch-callbacks-demo.c
    7  * livepatch-callbacks-demo.c - (un)patching callbacks livepatch demo
   13  * Demonstration of registering livepatch (un)patching callbacks.
   21  * insmod samples/livepatch/livepatch-callbacks-mod.ko
   24  * Step 2 - load the demonstration livepatch (with callbacks)
   26  * insmod samples/livepatch/livepatch-callbacks-demo.ko
   38  * NOTE: swap the insmod order of livepatch-callbacks-mod.ko and
   39  * livepatch-callbacks-demo.ko to observe what happens when a
   40  * target module is loaded after a livepatch with callbacks.
   47  * insmod samples/livepatch/livepatch-callbacks-demo.ko
   53  * insmod samples/livepatch/livepatch-callbacks-mod.ko
  [all …]

livepatch-callbacks-mod.c
    7  * livepatch-callbacks-mod.c - (un)patching callbacks demo support module
   13  * Simple module to demonstrate livepatch (un)patching callbacks.
   20  * section of livepatch-callbacks-demo.c.

/linux/kernel/livepatch/

core.h
   30  if (obj->callbacks.pre_patch)  in klp_pre_patch_callback()
   31  ret = (*obj->callbacks.pre_patch)(obj);  in klp_pre_patch_callback()
   33  obj->callbacks.post_unpatch_enabled = !ret;  in klp_pre_patch_callback()
   40  if (obj->callbacks.post_patch)  in klp_post_patch_callback()
   41  (*obj->callbacks.post_patch)(obj);  in klp_post_patch_callback()
   46  if (obj->callbacks.pre_unpatch)  in klp_pre_unpatch_callback()
   47  (*obj->callbacks.pre_unpatch)(obj);  in klp_pre_unpatch_callback()
   52  if (obj->callbacks.post_unpatch_enabled &&  in klp_post_unpatch_callback()
   53  obj->callbacks.post_unpatch)  in klp_post_unpatch_callback()
   54  (*obj->callbacks.post_unpatch)(obj);  in klp_post_unpatch_callback()
  [all …]

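The core.h hits above show the guarded-invocation pattern livepatch uses for optional callbacks: each function pointer is NULL-checked before the indirect call, and the pre-patch return value is latched in `post_unpatch_enabled` so the post-unpatch callback only ever runs for objects that were actually patched. A minimal userspace sketch of that pattern (struct and function names here are illustrative stand-ins, not the kernel API):

```c
/* Illustrative stand-ins for struct klp_object / struct klp_callbacks. */
struct obj;

struct callbacks {
	int  (*pre_patch)(struct obj *obj);
	void (*post_unpatch)(struct obj *obj);
	int post_unpatch_enabled;
};

struct obj {
	struct callbacks callbacks;
};

/* Invoke pre_patch if one is registered; remember whether it succeeded
 * (returned 0) so post_unpatch only runs for objects that were patched. */
static int pre_patch_callback(struct obj *obj)
{
	int ret = 0;

	if (obj->callbacks.pre_patch)
		ret = (*obj->callbacks.pre_patch)(obj);
	obj->callbacks.post_unpatch_enabled = !ret;
	return ret;
}

static int post_unpatch_ran;

static void post_unpatch_callback(struct obj *obj)
{
	if (obj->callbacks.post_unpatch_enabled && obj->callbacks.post_unpatch)
		(*obj->callbacks.post_unpatch)(obj);
}

/* Demo callbacks. */
static int fail_pre(struct obj *obj)      { (void)obj; return -1; }
static void note_unpatch(struct obj *obj) { (void)obj; post_unpatch_ran++; }
```

Latching the decision at pre-patch time means the unpatch path never has to guess whether its pre-patch counterpart actually ran.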
/linux/kernel/rcu/

rcu_segcblist.c
   97  /* Return number of callbacks in segmented callback list by summing seglen. */
  146  * field to disagree with the actual number of callbacks on the structure.
  155  * This can of course race with both queuing and invoking of callbacks.
  157  * rcu_barrier() failing to IPI a CPU that actually had callbacks queued
  168  * CASE 1: Suppose that CPU 0 has no callbacks queued, but invokes
  226  * callbacks on the structure. This increase is fully ordered with respect
  253  * Disable the specified rcu_segcblist structure, so that callbacks can
  264  * Does the specified rcu_segcblist structure contain callbacks that
  274  * Does the specified rcu_segcblist structure contain callbacks that
  326  * for rcu_barrier() to sometimes post callbacks needlessly, but
  [all …]

rcu_segcblist.h
   12  /* Return number of callbacks in the specified callback list. */
   20  /* Return number of callbacks in segmented callback list by summing seglen. */
   34  * necessarily imply that there are no callbacks associated with
   35  * this structure. When callbacks are being invoked, they are
   37  * the remaining callbacks will be added back to the list. Either
   48  /* Return number of callbacks in segmented callback list. */
  100  * rcu_segcblist structure empty of callbacks? (The specified
  101  * segment might well contain callbacks.)
  110  * empty of callbacks?

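The rcu_segcblist excerpts describe one linked list of callbacks partitioned into segments (done, waiting, next-ready, next) by tail pointers, with per-segment lengths that are summed to count callbacks. A much-simplified, single-threaded sketch of that data structure, under the assumption that we only model queuing, counting, and segment advancement (the real structure also carries function pointers per callback, grace-period numbers, and careful memory ordering):

```c
#include <stddef.h>

/* Segment names mirror rcu_segcblist; callbacks advance toward DONE. */
enum { SEG_DONE, SEG_WAIT, SEG_NEXT_READY, SEG_NEXT, NR_SEGS };

/* Real callbacks carry a function pointer; omitted to keep this minimal. */
struct cb {
	struct cb *next;
};

struct segcblist {
	struct cb *head;
	struct cb **tails[NR_SEGS];  /* tails[i]: last ->next pointer of segment i */
	long seglen[NR_SEGS];
};

static void segcblist_init(struct segcblist *l)
{
	int i;

	l->head = NULL;
	for (i = 0; i < NR_SEGS; i++) {
		l->tails[i] = &l->head;
		l->seglen[i] = 0;
	}
}

/* New callbacks always enter the NEXT segment, at the list's tail. */
static void segcblist_enqueue(struct segcblist *l, struct cb *cb)
{
	cb->next = NULL;
	*l->tails[SEG_NEXT] = cb;
	l->tails[SEG_NEXT] = &cb->next;
	l->seglen[SEG_NEXT]++;
}

/* Count callbacks by summing per-segment lengths (cf. the "summing
 * seglen" comment quoted above). */
static long segcblist_n_cbs(const struct segcblist *l)
{
	long sum = 0;
	int i;

	for (i = 0; i < NR_SEGS; i++)
		sum += l->seglen[i];
	return sum;
}

/* Model a grace period ending: every segment's callbacks shift one
 * step toward DONE; segment lengths follow the tail pointers. */
static void segcblist_advance(struct segcblist *l)
{
	int i;

	for (i = 0; i < NR_SEGS - 1; i++) {
		l->tails[i] = l->tails[i + 1];
		l->seglen[i] += l->seglen[i + 1];
		l->seglen[i + 1] = 0;
	}
}
```

The tail-pointer array is the key trick: moving callbacks between segments is a pointer copy, never a list walk.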
/linux/Documentation/driver-api/usb/

callbacks.rst
    1  USB core callbacks
    4  What callbacks will usbcore do?
    7  Usbcore will call into a driver through callbacks defined in the driver
   10  callbacks are completely independent of each other. Information on the
   13  The callbacks defined in the driver structure are:
   15  1. Hotplugging callbacks:
   34  3. Power management (PM) callbacks:
   55  reason. Sysfs is preferred these days. The PM callbacks are covered
   61  All callbacks are mutually exclusive. There's no need for locking
   62  against other USB callbacks. All callbacks are called from a task
  [all …]

/linux/Documentation/livepatch/

callbacks.rst
    2  (Un)patching Callbacks
    5  Livepatch (un)patch-callbacks provide a mechanism for livepatch modules
   16  In most cases, (un)patch callbacks will need to be used in conjunction
   23  Callbacks differ from existing kernel facilities:
   30  Callbacks are part of the klp_object structure and their implementation
   37  Callbacks can be registered for the following livepatch actions:
   61  symmetry: pre-patch callbacks have a post-unpatch counterpart and
   62  post-patch callbacks have a pre-unpatch counterpart. An unpatch
   69  in-kernel vmlinux targets, this means that callbacks will always execute
   71  callbacks will only execute if the target module is loaded. When a
  [all …]

cumulative-patches.rst
   70  extra modifications in (un)patching callbacks or in the module_init()
   77  - Only the (un)patching callbacks from the _new_ cumulative livepatch are
   78  executed. Any callbacks from the replaced patches are ignored.
   84  older ones. The old livepatches might not provide the necessary callbacks.
   92  the various callbacks and their interactions if the callbacks from all

/linux/Documentation/core-api/

cpu_hotplug.rst
  133  Once a CPU has been logically shutdown the teardown callbacks of registered
  158  When a CPU is onlined, the startup callbacks are invoked sequentially until
  160  callbacks of a state are set up or an instance is added to a multi-instance
  163  When a CPU is offlined the teardown callbacks are invoked in the reverse
  165  be invoked when the callbacks of a state are removed or an instance is
  179  The startup callbacks in this section are invoked before the CPU is
  180  started during a CPU online operation. The teardown callbacks are invoked
  183  The callbacks are invoked on a control CPU as they can't obviously run on
  187  The startup callbacks are used to setup resources which are required to
  188  bring a CPU successfully online. The teardown callbacks are used to free
  [all …]

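The cpu_hotplug.rst excerpts describe the ordering contract of the hotplug state machine: startup callbacks run in ascending state order when a CPU comes online, teardown callbacks run in the reverse order when it goes offline, and a startup failure unwinds the states already reached. A toy single-threaded model of just that ordering (all names are illustrative; this is not the cpuhp API):

```c
#define NR_STATES 4

struct hp_state {
	int (*startup)(int cpu);
	int (*teardown)(int cpu);
};

static struct hp_state states[NR_STATES];

/* Record the order in which demo callbacks fire. */
static int trace[16], trace_len;

static int up0(int cpu)   { (void)cpu; trace[trace_len++] = +1; return 0; }
static int up1(int cpu)   { (void)cpu; trace[trace_len++] = +2; return 0; }
static int down0(int cpu) { (void)cpu; trace[trace_len++] = -1; return 0; }
static int down1(int cpu) { (void)cpu; trace[trace_len++] = -2; return 0; }

/* Online: run startup callbacks in ascending state order; on failure,
 * roll back the already-completed states in reverse, as cpuhp does. */
static int cpu_up(int cpu)
{
	int st, ret;

	for (st = 0; st < NR_STATES; st++) {
		if (!states[st].startup)
			continue;
		ret = states[st].startup(cpu);
		if (ret) {
			for (st--; st >= 0; st--)
				if (states[st].teardown)
					states[st].teardown(cpu);
			return ret;
		}
	}
	return 0;
}

/* Offline: run teardown callbacks in the reverse order. */
static void cpu_down(int cpu)
{
	int st;

	for (st = NR_STATES - 1; st >= 0; st--)
		if (states[st].teardown)
			states[st].teardown(cpu);
}
```

The symmetric reverse-order teardown is what lets each state assume that everything set up by lower-numbered states is still present.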
/linux/include/linux/

rcu_segcblist.h
   37  * Callbacks whose grace period has elapsed, and thus can be invoked.
   39  * Callbacks waiting for the current GP from the current CPU's viewpoint.
   41  * Callbacks that arrived before the next GP started, again from
   44  * Callbacks that might have arrived after the next GP started.
   54  * corresponding segment of callbacks will be ready to invoke. A given
   56  * is non-empty, and it is never valid for RCU_DONE_TAIL (whose callbacks
   57  * are already ready to invoke) or for RCU_NEXT_TAIL (whose callbacks have
   74  * | Callbacks processed by rcu_core() from softirqs or local |
   82  * | Callbacks processed by rcu_core() from softirqs or local |
   91  * | CB kthread got unparked and processes callbacks concurrently with |
  [all …]

cpuhotplug.h
   27  * startup callbacks sequentially from CPUHP_OFFLINE + 1 to CPUHP_ONLINE
   29  * installed teardown callbacks are invoked in the reverse order from
   34  * PREPARE: The callbacks are invoked on a control CPU before the
   37  * STARTING: The callbacks are invoked on the hotplugged CPU from the low level
   40  * ONLINE: The callbacks are invoked on the hotplugged CPU from the per CPU
  261  * cpuhp_setup_state - Setup hotplug state callbacks with calling the @startup
  280  * cpuhp_setup_state_cpuslocked - Setup hotplug state callbacks with calling
  301  * cpuhp_setup_state_nocalls - Setup hotplug state callbacks without calling the
  321  * cpuhp_setup_state_nocalls_cpuslocked - Setup hotplug state callbacks without
  324  * callbacks
  [all …]

powercap.h
   24  * struct powercap_control_type_ops - Define control type callbacks
   37  * This structure defines control type callbacks to be implemented by client
   75  * struct powercap_zone_ops - Define power zone callbacks
   92  * This structure defines zone callbacks to be implemented by client drivers.
   93  * Client drives can define both energy and power related callbacks. But at
   95  * should handle mutual exclusion, if required in callbacks.
  155  * struct powercap_zone_constraint_ops - Define constraint callbacks
  166  * This structure is used to define the constraint callbacks for the client
  167  * drivers. The following callbacks are mandatory and can't be NULL:
  173  * Client drivers should handle mutual exclusion, if required in callbacks.
  [all …]

pm.h
   21  * Callbacks for platform drivers to implement.
   63  * struct dev_pm_ops - device PM callbacks.
   74  * followed by one of the suspend callbacks: @suspend(), @freeze(), or
   86  * starting to invoke suspend callbacks for any of them, so generally
   97  * all kinds of resume transitions, following one of the resume callbacks:
  104  * the appropriate resume callbacks for all devices. If the corresponding
  107  * executing any suspend and resume callbacks for it), @complete() will be
  113  * callbacks have been executed for it.
  207  * signal system wakeup by any of these callbacks.
  248  * The externally visible transitions are handled with the help of callbacks
  [all …]

/linux/Documentation/driver-api/pm/

devices.rst
  272  executing callbacks for every device before the next phase begins. Not all
  273  buses or classes support all these callbacks and not all drivers use all the
  274  callbacks. The various phases always run after tasks have been frozen and
  279  All phases use PM domain, bus, type, class or driver callbacks (that is, methods
  281  ``dev->class->pm`` or ``dev->driver->pm``). These callbacks are regarded by the
  282  PM core as mutually exclusive. Moreover, PM domain callbacks always take
  283  precedence over all of the other callbacks and, for example, type callbacks take
  284  precedence over bus, class and driver callbacks. To be precise, the following
  300  This allows PM domains and device types to override callbacks provided by bus
  303  The PM domain, type, class and bus callbacks may in turn invoke device- or
  [all …]

/linux/tools/testing/selftests/livepatch/

test-callbacks.sh
   20  # pre-patch callbacks are executed for vmlinux and $MOD_TARGET (those
   22  # according to the klp_patch, their post-patch callbacks run and the
   25  # - Similarly, on livepatch disable, pre-patch callbacks run before the
   27  # callbacks execute and the transition completes.
   67  # - On livepatch enable, only pre/post-patch callbacks are executed for
   71  # pre/post-patch callbacks are executed.
   74  # $MOD_TARGET) pre/post-unpatch callbacks are executed.
  119  # post-unpatch callbacks are executed when this occurs.
  121  # - When the livepatch is disabled, pre and post-unpatch callbacks are
  166  # pre/post-patch callbacks are executed.
  [all …]

/linux/Documentation/power/

pci.rst
  272  2.1. Device Power Management Callbacks
  280  pointers to several device power management callbacks::
  302  These callbacks are executed by the PM core in various situations related to
  303  device power management and they, in turn, execute power management callbacks
  309  that these callbacks operate on::
  363  Namely, it provides subsystem-level callbacks::
  431  management callbacks for this purpose. They are executed in phases such that
  444  The following PCI bus type's callbacks, respectively, are used in these phases::
  459  pointers to the driver's callbacks), pci_pm_default_suspend() is called, which
  483  device driver's callbacks executed before might do that), pci_pm_suspend_noirq()
  [all …]

/linux/Documentation/RCU/

UP.rst
   77  It is far better to guarantee that callbacks are invoked
   85  What locking restriction must RCU callbacks respect?
   90  permit call_rcu() to directly invoke callbacks, but only if a full
   91  grace period has elapsed since those callbacks were queued. This is
   94  encouraged to avoid invoking callbacks from call_rcu(), thus obtaining
  102  infrastructure *must* respect grace periods, and *must* invoke callbacks
  123  What locking restriction must RCU callbacks respect?
  134  then, since RCU callbacks can be invoked from softirq context,
  140  callbacks acquire locks directly. However, a great many RCU
  141  callbacks do acquire locks *indirectly*, for example, via

rcubarrier.rst
   34  If we unload the module while some RCU callbacks are pending,
   35  the CPUs executing these callbacks are going to be severely
   41  grace period to elapse, it does not wait for the callbacks to complete.
   45  heavy RCU-callback load, then some of the callbacks might be deferred in
   56  outstanding RCU callbacks to complete. Please note that rcu_barrier()
   58  callbacks queued anywhere, rcu_barrier() is within its rights to return
   63  1. Prevent any new RCU callbacks from being posted.
  136  52 /* Wait for all RCU callbacks to fire. */
  149  Line 6 sets a global variable that prevents any RCU callbacks from
  151  RCU callbacks rarely include calls to call_rcu(). However, the rcutorture
  [all …]

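The rcubarrier.rst excerpts sketch the module-unload discipline: first prevent any new RCU callbacks from being posted, then wait until every already-posted callback has been invoked before freeing module state. A single-threaded toy of that two-step pattern, where `toy_call_rcu()`/`toy_rcu_barrier()` are stand-ins for the kernel functions and the "grace period" is collapsed into an immediate drain:

```c
#include <stddef.h>

struct rcu_head {
	struct rcu_head *next;
	void (*func)(struct rcu_head *head);
};

static struct rcu_head *pending;
static int stopped;		/* step 1: no new callbacks may be posted */
static int invoked;

/* Queue a callback, unless the module is shutting down. */
static int toy_call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *))
{
	if (stopped)
		return -1;	/* unloading: refuse new work */
	head->func = func;
	head->next = pending;
	pending = head;
	return 0;
}

/* Step 2: wait for all posted callbacks to run.  In the kernel this is
 * rcu_barrier(); here we simply drain the list, re-checking it because
 * a callback could itself have posted more callbacks earlier. */
static void toy_rcu_barrier(void)
{
	while (pending) {
		struct rcu_head *head = pending;

		pending = head->next;
		head->func(head);
		invoked++;
	}
}

static void noop_cb(struct rcu_head *head) { (void)head; }
```

Setting the stop flag before the barrier is what makes the drain terminate: once `stopped` is set, no callback can re-arm itself, mirroring the "prevent new postings first" ordering the document insists on.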
/linux/sound/drivers/opl3/

opl3_seq.c
  163  struct snd_seq_port_callback callbacks;  in snd_opl3_synth_create_port() local
  174  memset(&callbacks, 0, sizeof(callbacks));  in snd_opl3_synth_create_port()
  175  callbacks.owner = THIS_MODULE;  in snd_opl3_synth_create_port()
  176  callbacks.use = snd_opl3_synth_use;  in snd_opl3_synth_create_port()
  177  callbacks.unuse = snd_opl3_synth_unuse;  in snd_opl3_synth_create_port()
  178  callbacks.event_input = snd_opl3_synth_event_input;  in snd_opl3_synth_create_port()
  179  callbacks.private_free = snd_opl3_synth_free_port;  in snd_opl3_synth_create_port()
  180  callbacks.private_data = opl3;  in snd_opl3_synth_create_port()
  186  opl3->chset->port = snd_seq_event_port_attach(opl3->seq_client, &callbacks,  in snd_opl3_synth_create_port()

opl3_oss.c
   49  struct snd_seq_port_callback callbacks;  in snd_opl3_oss_create_port() local
   60  memset(&callbacks, 0, sizeof(callbacks));  in snd_opl3_oss_create_port()
   61  callbacks.owner = THIS_MODULE;  in snd_opl3_oss_create_port()
   62  callbacks.event_input = snd_opl3_oss_event_input;  in snd_opl3_oss_create_port()
   63  callbacks.private_free = snd_opl3_oss_free_port;  in snd_opl3_oss_create_port()
   64  callbacks.private_data = opl3;  in snd_opl3_oss_create_port()
   70  opl3->oss_chset->port = snd_seq_event_port_attach(opl3->seq_client, &callbacks,  in snd_opl3_oss_create_port()

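Both opl3 files follow the same registration idiom: zero a callback struct on the stack, fill in only the hooks this port needs, and pass it to the attach function, which copies it. A generic userspace mock of the idiom (a hypothetical `port_attach()` standing in for `snd_seq_event_port_attach()`; none of these names are the ALSA API):

```c
#include <stddef.h>

struct port_callback {
	void *private_data;
	int  (*use)(void *private_data);
	int  (*unuse)(void *private_data);
	int  (*event_input)(void *private_data, int event);
	void (*private_free)(void *private_data);
};

struct port {
	struct port_callback cb;	/* registration copies the struct */
};

/* Attach copies the caller's (possibly stack-allocated) callback struct,
 * so the caller's copy can go out of scope afterwards. */
static int port_attach(struct port *port, const struct port_callback *cb)
{
	if (!cb || !cb->event_input)	/* require the one mandatory hook */
		return -1;
	port->cb = *cb;
	return 0;
}

/* Demo hook: echo the event code back. */
static int demo_event_input(void *private_data, int event)
{
	(void)private_data;
	return event;
}
```

Zero-initializing first is the important part: every hook the driver does not fill in stays NULL, and the consumer NULL-checks before calling, so optional hooks come for free.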
/linux/tools/rcu/

rcu-cbs.py
    4  # Dump out the number of RCU callbacks outstanding.
    7  # number of callbacks for the most heavily used flavor.
   39  # Sum up RCU callbacks.
   44  # print("CPU " + str(cpu) + " RCU callbacks: " + str(len));
   46  print("Number of RCU callbacks in flight: " + str(sum));

/linux/net/lapb/

lapb_iface.c
  140  const struct lapb_register_struct *callbacks)  in lapb_register() argument
  159  lapb->callbacks = callbacks;  in lapb_register()
  411  if (lapb->callbacks->connect_confirmation)  in lapb_connect_confirmation()
  412  lapb->callbacks->connect_confirmation(lapb->dev, reason);  in lapb_connect_confirmation()
  417  if (lapb->callbacks->connect_indication)  in lapb_connect_indication()
  418  lapb->callbacks->connect_indication(lapb->dev, reason);  in lapb_connect_indication()
  423  if (lapb->callbacks->disconnect_confirmation)  in lapb_disconnect_confirmation()
  424  lapb->callbacks->disconnect_confirmation(lapb->dev, reason);  in lapb_disconnect_confirmation()
  429  if (lapb->callbacks->disconnect_indication)  in lapb_disconnect_indication()
  430  lapb->callbacks->disconnect_indication(lapb->dev, reason);  in lapb_disconnect_indication()
  [all …]

/linux/drivers/gpu/drm/amd/display/dc/dml2/dml21/

dml21_utils.c
   75  struct pipe_ctx *opp_head = dml_ctx->config.callbacks.get_opp_head(pipe);  in find_pipe_regs_idx()
   77  *pipe_regs_idx = dml_ctx->config.callbacks.get_odm_slice_index(opp_head);  in find_pipe_regs_idx()
   80  *pipe_regs_idx += dml_ctx->config.callbacks.get_mpc_slice_index(pipe);  in find_pipe_regs_idx()
  108  dc_main_stream = dml_ctx->config.callbacks.get_stream_from_id(context, main_stream_id);  in dml21_find_dc_pipes_for_plane()
  109  dc_main_stream_status = dml_ctx->config.callbacks.get_stream_status(context, dc_main_stream);  in dml21_find_dc_pipes_for_plane()
  118  …num_pipes = dml_ctx->config.callbacks.get_dpp_pipes_for_plane(dc_main_plane, &context->res_ctx, dc…  in dml21_find_dc_pipes_for_plane()
  121  …struct pipe_ctx *otg_master_pipe = dml_ctx->config.callbacks.get_otg_master_for_stream(&context->r…  in dml21_find_dc_pipes_for_plane()
  123  …num_pipes = dml_ctx->config.callbacks.get_opp_heads_for_otg_master(otg_master_pipe, &context->res_…  in dml21_find_dc_pipes_for_plane()
  127  …dc_phantom_stream = dml_ctx->config.svp_pstate.callbacks.get_paired_subvp_stream(context, dc_main_…  in dml21_find_dc_pipes_for_plane()
  129  …dc_phantom_stream_status = dml_ctx->config.callbacks.get_stream_status(context, dc_phantom_stream);  in dml21_find_dc_pipes_for_plane()
  [all …]

/linux/drivers/gpu/drm/amd/display/dc/dml2/

dml2_mall_phantom.c
   54  ctx->config.svp_pstate.callbacks.get_pipe_subvp_type(context, pipe) == SUBVP_PHANTOM) {  in dml2_helper_calculate_num_ways_for_subvp()
  125  …ctx->config.svp_pstate.callbacks.release_dsc(&context->res_ctx, ctx->config.svp_pstate.callbacks.d…  in merge_pipes_for_subvp()
  256  …ctx->config.svp_pstate.callbacks.get_pipe_subvp_type(context, pipe) == SUBVP_NONE && refresh_rate …  in assign_subvp_pipe()
  320  ctx->config.svp_pstate.callbacks.get_pipe_subvp_type(state, pipe) == SUBVP_NONE) {  in enough_pipes_for_subvp()
  375  ctx->config.svp_pstate.callbacks.get_pipe_subvp_type(context, pipe) == SUBVP_MAIN) {  in subvp_subvp_schedulable()
  376  phantom = ctx->config.svp_pstate.callbacks.get_paired_subvp_stream(context, pipe->stream);  in subvp_subvp_schedulable()
  457  if (ctx->config.svp_pstate.callbacks.get_pipe_subvp_type(context, pipe) == SUBVP_MAIN)  in dml2_svp_drr_schedulable()
  461  phantom_stream = ctx->config.svp_pstate.callbacks.get_paired_subvp_stream(context, pipe->stream);  in dml2_svp_drr_schedulable()
  535  pipe_mall_type = ctx->config.svp_pstate.callbacks.get_pipe_subvp_type(context, pipe);  in subvp_vblank_schedulable()
  556  …phantom_stream = ctx->config.svp_pstate.callbacks.get_paired_subvp_stream(context, subvp_pipe->str…  in subvp_vblank_schedulable()
  [all …]

/linux/block/

blk-stat.c
   15  struct list_head callbacks;  member
   62  list_for_each_entry_rcu(cb, &q->stats->callbacks, list) {  in blk_stat_add()
  149  list_add_tail_rcu(&cb->list, &q->stats->callbacks);  in blk_stat_add_callback()
  161  if (list_empty(&q->stats->callbacks) && !q->stats->accounting)  in blk_stat_remove_callback()
  189  if (!--q->stats->accounting && list_empty(&q->stats->callbacks))  in blk_stat_disable_accounting()
  200  if (!q->stats->accounting++ && list_empty(&q->stats->callbacks))  in blk_stat_enable_accounting()
  214  INIT_LIST_HEAD(&stats->callbacks);  in blk_alloc_queue_stats()
  226  WARN_ON(!list_empty(&stats->callbacks));  in blk_free_queue_stats()

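The blk-stat.c hits show a registry with two kinds of users: a list of registered callbacks plus an "accounting" refcount, where statistics tracking stays enabled while either is non-empty and is switched off only when the last user of both kinds goes away. A simplified single-threaded sketch of that enable/disable logic (the kernel version uses an RCU-protected `list_head` under a spinlock; these names are illustrative):

```c
#include <stddef.h>

struct stat_cb {
	struct stat_cb *next;
};

struct queue_stats {
	struct stat_cb *callbacks;	/* registered callbacks */
	int accounting;			/* refcount of accounting users */
	int enabled;			/* stand-in for the queue's "track stats" flag */
};

/* Track stats while there is at least one callback or accounting user. */
static void stats_update_enabled(struct queue_stats *s)
{
	s->enabled = s->callbacks != NULL || s->accounting > 0;
}

static void stat_add_callback(struct queue_stats *s, struct stat_cb *cb)
{
	cb->next = s->callbacks;
	s->callbacks = cb;
	stats_update_enabled(s);
}

static void stat_remove_callback(struct queue_stats *s, struct stat_cb *cb)
{
	struct stat_cb **p;

	for (p = &s->callbacks; *p; p = &(*p)->next) {
		if (*p == cb) {
			*p = cb->next;
			break;
		}
	}
	stats_update_enabled(s);
}

static void stat_enable_accounting(struct queue_stats *s)
{
	s->accounting++;
	stats_update_enabled(s);
}

static void stat_disable_accounting(struct queue_stats *s)
{
	s->accounting--;
	stats_update_enabled(s);
}
```

This mirrors the `list_empty(...) && !accounting` checks in the excerpts: removing a callback does not disable tracking while an accounting user still holds it on, and vice versa.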