Message-Id: <85536-177443-curtm@phaethon>
Date: Tue, 15 Jun 2021 16:42:10 +0000
From: Kurt Manucredo <fuzzybritches0@...il.com>
To: ebiggers@...nel.org, syzbot+bed360704c521841c85d@...kaller.appspotmail.com
Cc: Kurt Manucredo <fuzzybritches0@...il.com>, keescook@...omium.org, yhs@...com, dvyukov@...gle.com, andrii@...nel.org, ast@...nel.org, bpf@...r.kernel.org, daniel@...earbox.net, davem@...emloft.net, hawk@...nel.org, john.fastabend@...il.com, kafai@...com, kpsingh@...nel.org, kuba@...nel.org, linux-kernel@...r.kernel.org, netdev@...r.kernel.org, songliubraving@...com, syzkaller-bugs@...glegroups.com, nathan@...nel.org, ndesaulniers@...gle.com, clang-built-linux@...glegroups.com, kernel-hardening@...ts.openwall.com, kasan-dev@...glegroups.com
Subject: [PATCH v5] bpf: core: fix shift-out-of-bounds in ___bpf_prog_run

Syzbot detected a shift-out-of-bounds in ___bpf_prog_run(),
kernel/bpf/core.c:1414:2. The shift-out-of-bounds happens when the shift
amount comes from a register (BPF_X), so it cannot be rejected at load
time. That means we have to handle it the same way the divide-by-zero
case is handled: with a runtime rewrite in do_misc_fixups().

For the immediate case (BPF_K), the divide-by-zero and
shift-out-of-bounds guards sit next to each other in check_alu_op(). It
seems only logical to me that the same should be true for BPF_X in
do_misc_fixups(), since that is where I found the divide-by-zero guard.
Or is there a reason I'm not aware of that dictates the checks should
stay in adjust_scalar_min_max_vals(), as they are now?

This patch was tested by syzbot.

Reported-and-tested-by: syzbot+bed360704c521841c85d@...kaller.appspotmail.com
Signed-off-by: Kurt Manucredo <fuzzybritches0@...il.com>
---
https://syzkaller.appspot.com/bug?id=edb51be4c9a320186328893287bb30d5eed09231

Changelog:
----------
v5 - Fix shift-out-of-bounds in do_misc_fixups().
v4 - Fix shift-out-of-bounds in adjust_scalar_min_max_vals().
     Fix commit message.
v3 - Make it clearer what the fix is for.
v2 - Fix shift-out-of-bounds in ___bpf_prog_run() by adding boundary
     check in check_alu_op() in verifier.c.
v1 - Fix shift-out-of-bounds in ___bpf_prog_run() by adding boundary
     check in ___bpf_prog_run().

thanks

kind regards

Kurt

 kernel/bpf/verifier.c | 53 +++++++++++++++++++++++++------------------
 1 file changed, 31 insertions(+), 22 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 94ba5163d4c5..83c7c1ccaf26 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -7496,7 +7496,6 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
 	u64 umin_val, umax_val;
 	s32 s32_min_val, s32_max_val;
 	u32 u32_min_val, u32_max_val;
-	u64 insn_bitness = (BPF_CLASS(insn->code) == BPF_ALU64) ? 64 : 32;
 	bool alu32 = (BPF_CLASS(insn->code) != BPF_ALU64);
 	int ret;
 
@@ -7592,39 +7591,18 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
 		scalar_min_max_xor(dst_reg, &src_reg);
 		break;
 	case BPF_LSH:
-		if (umax_val >= insn_bitness) {
-			/* Shifts greater than 31 or 63 are undefined.
-			 * This includes shifts by a negative number.
-			 */
-			mark_reg_unknown(env, regs, insn->dst_reg);
-			break;
-		}
 		if (alu32)
 			scalar32_min_max_lsh(dst_reg, &src_reg);
 		else
 			scalar_min_max_lsh(dst_reg, &src_reg);
 		break;
 	case BPF_RSH:
-		if (umax_val >= insn_bitness) {
-			/* Shifts greater than 31 or 63 are undefined.
-			 * This includes shifts by a negative number.
-			 */
-			mark_reg_unknown(env, regs, insn->dst_reg);
-			break;
-		}
 		if (alu32)
 			scalar32_min_max_rsh(dst_reg, &src_reg);
 		else
 			scalar_min_max_rsh(dst_reg, &src_reg);
 		break;
 	case BPF_ARSH:
-		if (umax_val >= insn_bitness) {
-			/* Shifts greater than 31 or 63 are undefined.
-			 * This includes shifts by a negative number.
-			 */
-			mark_reg_unknown(env, regs, insn->dst_reg);
-			break;
-		}
 		if (alu32)
 			scalar32_min_max_arsh(dst_reg, &src_reg);
 		else
@@ -12353,6 +12331,37 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			continue;
 		}
 
+		/* Make shift-out-of-bounds exceptions impossible. */
+		if (insn->code == (BPF_ALU64 | BPF_LSH | BPF_X) ||
+		    insn->code == (BPF_ALU64 | BPF_RSH | BPF_X) ||
+		    insn->code == (BPF_ALU64 | BPF_ARSH | BPF_X) ||
+		    insn->code == (BPF_ALU | BPF_LSH | BPF_X) ||
+		    insn->code == (BPF_ALU | BPF_RSH | BPF_X) ||
+		    insn->code == (BPF_ALU | BPF_ARSH | BPF_X)) {
+			bool is64 = BPF_CLASS(insn->code) == BPF_ALU64;
+			u8 insn_bitness = is64 ? 64 : 32;
+			struct bpf_insn chk_and_shift[] = {
+				/* [R,W]x shift >= 32||64 -> 0 */
+				BPF_RAW_INSN((is64 ? BPF_JMP : BPF_JMP32) |
+					     BPF_JLT | BPF_K, insn->src_reg,
+					     insn_bitness, 2, 0),
+				BPF_ALU32_REG(BPF_XOR, insn->dst_reg, insn->dst_reg),
+				BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+				*insn,
+			};
+
+			cnt = ARRAY_SIZE(chk_and_shift);
+
+			new_prog = bpf_patch_insn_data(env, i + delta, chk_and_shift, cnt);
+			if (!new_prog)
+				return -ENOMEM;
+
+			delta += cnt - 1;
+			env->prog = prog = new_prog;
+			insn = new_prog->insnsi + i + delta;
+			continue;
+		}
+
 		/* Implement LD_ABS and LD_IND with a rewrite, if supported by the program type. */
 		if (BPF_CLASS(insn->code) == BPF_LD &&
 		    (BPF_MODE(insn->code) == BPF_ABS ||
-- 
2.30.2
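For reviewers tracing the rewrite: chk_and_shift makes an out-of-range
BPF_X shift produce 0 instead of hitting undefined C shift behaviour in
the interpreter. The JLT jumps over the next two instructions (the XOR
that zeroes dst and the JA that skips the shift) when the shift amount
is in range; otherwise dst is zeroed and the shift is skipped. A minimal
model of the resulting semantics (a sketch in Python; the function and
names are mine, not kernel code):

```python
def guarded_shift(op, dst, src, is64):
    """Semantics a shift gets after the chk_and_shift rewrite."""
    bitness = 64 if is64 else 32
    mask = (1 << bitness) - 1
    if src >= bitness:             # BPF_JLT check fails: XOR zeroes dst,
        return 0                   # BPF_JA skips the shift entirely
    if op == "lsh":                # BPF_LSH: logical left shift
        return (dst << src) & mask
    if op == "rsh":                # BPF_RSH: logical right shift
        return (dst & mask) >> src
    if op == "arsh":               # BPF_ARSH: arithmetic (sign-extending)
        signed = dst - (1 << bitness) if dst & (1 << (bitness - 1)) else dst
        return (signed >> src) & mask
    raise ValueError(op)
```

Note the BPF_ALU32_REG XOR zeroes the full 64-bit register either way,
since 32-bit ALU results are zero-extended, so the out-of-range result
is 0 for both ALU and ALU64 shifts.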