/linux/arch/arm/probes/kprobes/ |
H A D | core.h | diff 0dc016dbd820260b8ea74337980735b8c88d4ef2 Fri Jan 09 07:37:36 CET 2015 Wang Nan <wangnan0@huawei.com> ARM: kprobes: enable OPTPROBES for ARM 32
This patch introduces kprobeopt for ARM 32.
Limitations:
- Currently only kernels compiled with the ARM ISA are supported.
- The offset between the probe point and the optinsn slot must not be larger than 32MiB. Masami Hiramatsu suggested replacing 2 words instead, but that would make things more complex; a further patch can make such an optimization.
Kprobe opt on ARM is relatively simpler than kprobe opt on x86 because an ARM instruction is always 4 bytes long and 4-byte aligned. This patch replaces the probed instruction with a 'b' branch to trampoline code, which then calls optimized_callback(). optimized_callback() calls opt_pre_handler() to execute the kprobe handler, and it also emulates/simulates the replaced instruction.
When unregistering a kprobe, the deferred nature of the unoptimizer may leave the branch instruction in place before the optimizer is called. Unlike x86_64, which copies the probed insn after optprobe_template_end and re-executes it, this patch calls the single-step path to emulate/simulate the insn directly. A further patch can optimize this behavior.
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Jon Medhurst (Tixy) <tixy@linaro.org>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
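The 32MiB limitation and the 'b' replacement described above both come from the ARM branch encoding: a 24-bit signed word offset gives a reach of +/-32MiB from the probe point. The following is a minimal, self-contained sketch of that arithmetic; encode_arm_branch() is an illustrative helper written for this note (not the kernel's own branch generator), and the addresses in main() are made-up examples.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative helper only -- not the kernel's branch generator.
 * Encode an ARM-mode unconditional branch ("b target") that will be
 * placed at address 'pc'.  Returns 0 when the target is out of reach.
 */
static uint32_t encode_arm_branch(uint32_t pc, uint32_t target)
{
        /* In ARM mode the PC reads as the instruction address + 8. */
        int32_t offset = (int32_t)(target - (pc + 8));

        /* imm24 is a signed word offset: reach is +/-32MiB, word aligned. */
        if (offset < -(1 << 25) || offset >= (1 << 25) || (offset & 3))
                return 0;

        /* cond = AL (0xe), the "b" opcode bits, then the 24-bit word offset. */
        return 0xea000000u | (((uint32_t)offset >> 2) & 0x00ffffffu);
}

int main(void)
{
        uint32_t probe = 0xc0100000u;          /* made-up probe address   */
        uint32_t slot  = probe + (16u << 20);  /* optinsn slot 16MiB away */

        printf("slot in reach:  0x%08" PRIx32 "\n",
               encode_arm_branch(probe, slot));
        printf("slot 64MiB off: 0x%08" PRIx32 "\n",   /* prints 0: too far */
               encode_arm_branch(probe, probe + (64u << 20)));
        return 0;
}

A probe whose optinsn slot falls outside this reach cannot be rewritten into a single 'b', which is exactly the 32MiB limitation listed in the commit message.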
|
H A D | Makefile | diff 0dc016dbd820260b8ea74337980735b8c88d4ef2 Fri Jan 09 07:37:36 CET 2015 Wang Nan <wangnan0@huawei.com> ARM: kprobes: enable OPTPROBES for ARM 32
|
/linux/arch/arm/include/asm/ |
H A D | insn.h | 0dc016dbd820260b8ea74337980735b8c88d4ef2 Fri Jan 09 07:37:36 CET 2015 Wang Nan <wangnan0@huawei.com> ARM: kprobes: enable OPTPROBES for ARM 32
|
H A D | kprobes.h | diff 0dc016dbd820260b8ea74337980735b8c88d4ef2 Fri Jan 09 07:37:36 CET 2015 Wang Nan <wangnan0@huawei.com> ARM: kprobes: enable OPTPROBES for ARM 32
|
/linux/arch/arm/kernel/ |
H A D | Makefile | diff 0dc016dbd820260b8ea74337980735b8c88d4ef2 Fri Jan 09 07:37:36 CET 2015 Wang Nan <wangnan0@huawei.com> ARM: kprobes: enable OPTPROBES for ARM 32
|
/linux/arch/arm/ |
H A D | Kconfig | diff 0dc016dbd820260b8ea74337980735b8c88d4ef2 Fri Jan 09 07:37:36 CET 2015 Wang Nan <wangnan0@huawei.com> ARM: kprobes: enable OPTPROBES for ARM 32
|