Message-ID: <CAFnh-mfmsFLbjqJ4ORjq6Ov+dpk2uwJSZzyKiL8yFmmG4biVBw@mail.gmail.com>
Date: Thu, 1 Jun 2017 10:32:37 -0500
From: Alex Crichton <acrichton@...illa.com>
To: musl@...ts.openwall.com
Subject: Use-after-free in __unlock

Hello! I work on the rust-lang/rust compiler [1], and one of the platforms we
run CI for is x86_64 Linux with musl as the libc. We've got a longstanding
issue [2] of spurious segfaults in musl binaries on our CI; one of our
contributors managed to get a stack trace, and I think we've tracked down the
bug!

I believe there's a use-after-free in the `__unlock` function when used with
threading in musl (still present on master as far as I can tell). The source of
the unlock function looks like:

    void __unlock(volatile int *l)
    {
        if (l[0]) {
            a_store(l, 0);
            if (l[1]) __wake(l, 1, 1);
        }
    }

The problem I believe I'm seeing is that after `a_store` finishes, the memory
behind the lock, `l`, can be deallocated by another thread. This means that the
later access of `l[1]` is a use-after-free, and I believe it's the cause of the
spurious segfaults we're seeing on our CI. The reproduction we've got is the
following sequence of events (a rough standalone sketch of the same pattern
follows the list):

* Thread A starts thread B
* Thread A calls `pthread_detach` on the `pthread_t` handle it got back from
  `pthread_create` for thread B.
* The implementation of `pthread_detach` does its business and eventually calls
  `__unlock(t->exitlock)`.
* Meanwhile, thread B exits.
* Thread B sees that `t->exitlock` is unlocked and deallocates the `pthread_t`
  memory.
* Thread A comes back to access `l[1]` (which I believe is `t->exitlock[1]`)
  and segfaults, as this memory has been freed.
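
To make the shape of the race concrete outside of musl, here's a rough
standalone sketch. The names and the two-word lock layout are made up for
illustration; it mirrors the pattern rather than musl's actual code, and it
won't necessarily crash on its own since the allocator may keep the freed page
mapped, though AddressSanitizer or valgrind should flag the read:

    #include <pthread.h>
    #include <sched.h>
    #include <stdatomic.h>
    #include <stdlib.h>

    /* Heap object with a two-word "lock" embedded in it, standing in for the
       pthread_t with its exitlock. lock[0] = held, lock[1] = waiter count. */
    struct obj { atomic_int lock[2]; };

    static void *reaper(void *arg) {
      struct obj *o = arg;
      /* Like the exiting thread B: wait until the lock is released, then
         free the memory that contains it. */
      while (atomic_load(&o->lock[0]))
        sched_yield();
      free(o);
      return NULL;
    }

    int main(void) {
      struct obj *o = calloc(1, sizeof *o);
      atomic_store(&o->lock[0], 1);

      pthread_t t;
      pthread_create(&t, NULL, reaper, o);

      /* Like thread A in __unlock: release the lock, then go back and read
         lock[1]. The reaper may free(o) between the two accesses. */
      atomic_store(&o->lock[0], 0);
      sched_yield();                                   /* widen the window */
      volatile int waiters = atomic_load(&o->lock[1]); /* use-after-free read */
      (void)waiters;

      pthread_join(t, NULL);
      return 0;
    }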

I was trying to get a good, reliable reproduction, but it unfortunately ended
up being difficult! I was unable to reproduce easily with an unmodified musl
library, but with a small tweak I got it to reproduce pretty deterministically.
If you add a call to `sched_yield()` after the `a_store` in the `__unlock`
function (basically just manually introducing some thread yielding; the exact
edit is shown right after the program), then the following program will
segfault pretty quickly:

    #include <assert.h>
    #include <pthread.h>
    #include <sched.h>

    void *child(void *arg) {
      sched_yield();
      return arg;
    }

    int main() {
      while (1) {
        pthread_t mychild;
        assert(pthread_create(&mychild, NULL, child, NULL) == 0);
        assert(pthread_detach(mychild) == 0);
      }
    }
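
For clarity, the edit to `src/thread/__lock.c` mentioned in the build steps
below is just this one extra line (paraphrased; you may also need to make
`sched_yield` visible there, e.g. via `<sched.h>`):

    void __unlock(volatile int *l)
    {
        if (l[0]) {
            a_store(l, 0);
            sched_yield(); /* added: give the exiting thread a chance to run
                              and free the pthread_t before l[1] is read */
            if (l[1]) __wake(l, 1, 1);
        }
    }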

I compiled this all locally by running:

    $ git clone git://git.musl-libc.org/musl
    $ cd musl
    $ git rev-parse HEAD
    179766aa2ef06df854bc1d9616bf6f00ce49b7f9
    $ CFLAGS='-g' ./configure --prefix=$HOME/musl-tmp
    # edit `src/thread/__lock.c` to have a new call to `sched_yield`
    $ make -j10
    $ make install
    $ $HOME/musl-tmp/bin/musl-gcc foo.c -static
    $ ./a.out
    zsh: segmentation fault  ./a.out
    $ gdb ./a.out ./core*
    Core was generated by `./a.out'.
    Program terminated with signal SIGSEGV, Segmentation fault.
    #0  0x000000000040207e in __unlock (l=0x7f2d297f8fc0) at src/thread/__lock.c:15
    15                      if (l[1]) __wake(l, 1, 1);
    (gdb) disas
    Dump of assembler code for function __unlock:
       0x000000000040205c <+0>:     mov    (%rdi),%eax
       0x000000000040205e <+2>:     test   %eax,%eax
       0x0000000000402060 <+4>:     je     0x4020ac <__unlock+80>
       0x0000000000402062 <+6>:     sub    $0x18,%rsp
       0x0000000000402066 <+10>:    xor    %eax,%eax
       0x0000000000402068 <+12>:    mov    %eax,(%rdi)
       0x000000000040206a <+14>:    lock orl $0x0,(%rsp)
       0x000000000040206f <+19>:    mov    %rdi,0x8(%rsp)
       0x0000000000402074 <+24>:    callq  0x4004f5 <sched_yield>
       0x0000000000402079 <+29>:    mov    0x8(%rsp),%rdi
    => 0x000000000040207e <+34>:    mov    0x4(%rdi),%eax
       0x0000000000402081 <+37>:    test   %eax,%eax
       0x0000000000402083 <+39>:    je     0x4020a8 <__unlock+76>
       0x0000000000402085 <+41>:    mov    $0xca,%r8d
       0x000000000040208b <+47>:    mov    $0x1,%edx
       0x0000000000402090 <+52>:    mov    $0x81,%esi
       0x0000000000402095 <+57>:    mov    %r8,%rax
       0x0000000000402098 <+60>:    syscall
       0x000000000040209a <+62>:    cmp    $0xffffffffffffffda,%rax
       0x000000000040209e <+66>:    jne    0x4020a8 <__unlock+76>
       0x00000000004020a0 <+68>:    mov    %r8,%rax
       0x00000000004020a3 <+71>:    mov    %rdx,%rsi
       0x00000000004020a6 <+74>:    syscall
       0x00000000004020a8 <+76>:    add    $0x18,%rsp
       0x00000000004020ac <+80>:    retq
    End of assembler dump.

The segfaulting instruction is the load of `l[1]`, whose memory I believe was
deallocated by thread B as it exited.

I didn't really see a great fix myself, unfortunately, but assistance with this
would be much appreciated! If any more information is needed, I'm more than
willing to keep investigating locally and provide it; just let me know!

[1]: https://github.com/rust-lang/rust
[2]: https://github.com/rust-lang/rust/issues/38618
