Message-ID: <1501520360.0.593167188853569@go.bunnymail.go>
Date: Mon, 31 Jul 2017 22:06:51 +0200
From: felix.winkelmann@...uta.com
To: musl@...ts.openwall.com
Cc: peter@...e-magic.net
Subject: possible bug in setjmp implementation for ppc64

Hi!

I think I may have come across a bug in musl on PPC64(le), and the folks
on the #musl IRC channel directed me here. I'm not totally sure whether
the problem is caused by my misunderstanding of the C library functions
involved or whether it is a plain bug in the musl implementation of
setjmp(3).

In our project[1] we use setjmp to establish a global trampoline
and allocate small objects on the stack using alloca (see [2] for
more information about the compilation strategy used). I was able to
reduce the crashing code to the following:

---
#include <stdio.h>
#include <alloca.h>
#include <setjmp.h>
#include <string.h>
#include <stdlib.h>

jmp_buf jb;

int foo = 99;   /* printed on each pass, to show we keep running */
int c = 0;      /* counts the longjmps performed so far */

void bar()
{
  c++;
  longjmp(jb, 1);   /* jump back to the setjmp in main */
}

int main()
{
  setjmp(jb);             /* re-entered after every longjmp from bar() */
  char *p = alloca(256);  /* expected: fresh space below the live frame */
  memset(p, 0, 256);      /* clobbers the frame if that expectation fails */
  printf("%d\n", foo);

  if(c < 10) bar();

  exit(0);
}
---
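
Nothing special is needed to build it - assuming the default gcc/musl
toolchain on Alpine (the exact invocation below is just an example):

---
$ gcc repro.c -o repro
$ ./repro
---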

When executing the longjmp, the code that restores r2 (the TOC pointer)
after the call to setjmp reads invalid data, because the memset has
apparently clobbered the stack frame - i.e. the pointer returned by
alloca points into a part of the stack frame that is still in use.
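
For what it's worth, a rough way to see the overlap - this is just an
ad-hoc check of mine, using the GCC-specific __builtin_frame_address -
is to print the frame address next to the pointer alloca returns:

---
#include <stdio.h>
#include <alloca.h>
#include <setjmp.h>

jmp_buf jb;

int main(void)
{
  setjmp(jb);
  char *p = alloca(256);
  /* if [p, p+256) lies inside the live part of main's frame, any
     write through p can smash saved registers such as the TOC slot */
  printf("frame = %p, alloca = %p\n",
         __builtin_frame_address(0), (void *)p);
  return 0;
}
---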

I tried this on arm, x86_64 and ppc64 with glibc and it works fine, but
it crashes when linked with musl on ppc64 (running Alpine Linux in a
VM).

If you need more information, please feel free to ask. You can also keep
me CC'd, since I'd be interested in knowing more about the details.


felix

[1] http://www.call-cc.org
[2] http://home.pipeline.com/~hbaker1/CheneyMTA.html
