Message-ID: <CAKGDhHWHqkbsmE=48otsH5=mZX431R2gVDo86kyFbHpyP==K_A@mail.gmail.com>
Date: Sat, 6 Jun 2015 14:33:55 +0200
From: Agnieszka Bielec <bielecagnieszka8@...il.com>
To: john-dev@...ts.openwall.com
Subject: Re: PHC: Lyra2 on CPU

2015-06-06 14:09 GMT+02:00 Solar Designer <solar@...nwall.com>:
> On Sat, Jun 06, 2015 at 12:41:10PM +0200, Agnieszka Bielec wrote:
>> but it looks like OpenMP has problems with barriers, or there is something
>> I don't know. I wrote a simple program in C.
>
> In this program, your threads share an stdio buffer for stdout. That's
> risky. You're lucky stdout is line-buffered when output is to a tty, so
> the buffer is flushed after each printf(), but still you have your
> threads work on this shared data structure.
>
> I think it's wrong for you to use this to draw conclusions about OpenMP
> barriers, given that your debugging output itself may be out of order.
>
> You may want to try write(2) (a syscall, no buffering in userspace)
> instead of printf(3) (a libc function, buffering in userspace). For
> "production use", write(2) has its own drawbacks (it may return after
> sending only partial output, so it needs to be put in a loop), but for
> your current experiment it's a fine choice.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <omp.h>

static void func()
{
    char t[30];

    sprintf(t, "%d %d\n", omp_get_num_threads(), omp_get_thread_num());
    write(1, t, strlen(t)); /* fd 1 = stdout; write only the formatted bytes */

    write(1, "checkpoint 1\n", 13);
#pragma omp barrier
    write(1, "checkpoint 2\n", 13);
#pragma omp barrier
    write(1, "checkpoint 3\n", 13);
#pragma omp barrier
    write(1, "checkpoint 4\n", 13);
}

int main()
{
    int i;

#pragma omp parallel for
    for (i = 0; i < 2; i++) {
        func();
    }

    return 0;
}

output:

none@...e ~/Desktop $ ./omp
8 0
checkpoint 1
8 1
checkpoint 1
checkpoint 2
checkpoint 2
[blocks]
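[Editor's note, not part of the original message: in the test program above the
barriers sit inside a function called from the body of an "omp parallel for"
loop, so only the threads that actually get one of the two loop iterations ever
reach them, while the rest of an 8-thread team does not; that alone would be
enough to make the barriers wait forever. A minimal sketch of the same
checkpoints placed in a plain parallel region, where every thread of the team
calls func() and therefore reaches every barrier, might look like this:]

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <omp.h>

static void func(void)
{
    char t[32];

    snprintf(t, sizeof(t), "%d %d\n",
             omp_get_num_threads(), omp_get_thread_num());
    write(1, t, strlen(t));

    write(1, "checkpoint 1\n", 13);
#pragma omp barrier
    write(1, "checkpoint 2\n", 13);
#pragma omp barrier
    write(1, "checkpoint 3\n", 13);
}

int main(void)
{
    /* Every thread of the team calls func(), so every barrier is
     * reached by all threads and none of them waits forever. */
#pragma omp parallel
    func();

    return 0;
}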
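[Editor's note: Solar's aside that write(2) may return after sending only part
of the buffer is the usual reason it gets wrapped in a loop for production use.
A minimal sketch of such a wrapper; the name full_write and the EINTR handling
are illustrative additions, not something from this thread:]

#include <errno.h>
#include <stddef.h>
#include <unistd.h>

/* Keep calling write(2) until the whole buffer has been sent (or a real
 * error occurs), since a single call may write fewer bytes than requested. */
static ssize_t full_write(int fd, const void *buf, size_t count)
{
    const char *p = buf;
    size_t left = count;

    while (left > 0) {
        ssize_t n = write(fd, p, left);
        if (n < 0) {
            if (errno == EINTR)
                continue; /* interrupted by a signal; retry */
            return -1;
        }
        p += n;
        left -= n;
    }
    return (ssize_t)count;
}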