Message-ID: <20150606124454.GA25282@openwall.com>
Date: Sat, 6 Jun 2015 15:44:54 +0300
From: Solar Designer <solar@...nwall.com>
To: john-dev@...ts.openwall.com
Subject: Re: PHC: Lyra2 on CPU

On Sat, Jun 06, 2015 at 02:33:55PM +0200, Agnieszka Bielec wrote:
> #include <stdio.h>
> #include <omp.h>
> 
> static void func()
> {
>     char t[30];
>     sprintf(t,"%d %d\n",omp_get_num_threads(),omp_get_thread_num());
>     write(0,t,10);
>     write(0,"checkpoint 1\n",13);
>     #pragma omp barrier
>     write(0,"checkpoint 2\n",13);
>     #pragma omp barrier
>     write(0,"checkpoint 3\n",13);
>     #pragma omp barrier
>     write(0,"checkpoint 4\n",13);
> }

stdout is fd 1, not 0.  It's curious this produced output at all (likely
because when stdin is a terminal, the shell typically opens the tty
read/write and dups it to fds 0, 1, and 2, so writing to fd 0 happens to
work there).

Also, you're printing some garbage from the stack in the first write().
It should be:

    write(1,t,strlen(t));

Other than that, your program works for me (doesn't block).
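
For reference, here's the whole test with just those two fixes applied
(a sketch: the main() driver is my reconstruction from the parallel-for
snippet further down, since it wasn't quoted here):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <omp.h>

static void func(void)
{
    char t[30];
    sprintf(t, "%d %d\n", omp_get_num_threads(), omp_get_thread_num());
    write(1, t, strlen(t)); /* fd 1 = stdout; write only what sprintf produced */
    write(1, "checkpoint 1\n", 13);
#pragma omp barrier
    write(1, "checkpoint 2\n", 13);
#pragma omp barrier
    write(1, "checkpoint 3\n", 13);
#pragma omp barrier
    write(1, "checkpoint 4\n", 13);
}

int main(void)
{
    int i;
#pragma omp parallel for
    for (i = 0; i < 2; i++)
        func();
    return 0;
}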

> output:
> none@...e ~/Desktop $ ./omp
> 8 0
> checkpoint 1
> 8 1
> checkpoint 1
> checkpoint 2
> checkpoint 2
> [blocks]

This is as expected, except for it blocking, right?  Will it still block
for you if you correct the errors above?

BTW, note that:

#pragma omp parallel for
    for(i=0;i<2;i++)

isn't guaranteed to actually spawn two threads for the two iterations.
It probably will (as long as OMP_NUM_THREADS is at least 2, or is not
set and the system has at least two logical CPUs), but it is also
allowed to just execute the two iterations sequentially.  Observe:

$ ./b
2 1
checkpoint 1
2 0
checkpoint 1
checkpoint 2
checkpoint 2
checkpoint 3
checkpoint 3
checkpoint 4
checkpoint 4
$ OMP_NUM_THREADS=1 ./b
1 0
checkpoint 1
checkpoint 2
checkpoint 3
checkpoint 4
1 0
checkpoint 1
checkpoint 2
checkpoint 3
checkpoint 4
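
If a test like this really needs two threads meeting at the barriers,
one option (my illustration, not part of the original program) is to
request the team size explicitly with num_threads() and check what was
actually granted, since even that is only a request:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    /* Request a team of two; the runtime may still grant fewer
       (e.g., under a thread limit), so check what we actually got. */
#pragma omp parallel num_threads(2)
    {
        if (omp_get_thread_num() == 0 && omp_get_num_threads() < 2)
            fprintf(stderr, "warning: got only %d thread(s)\n",
                omp_get_num_threads());
        printf("%d %d\n", omp_get_num_threads(), omp_get_thread_num());
#pragma omp barrier
        printf("thread %d past the barrier\n", omp_get_thread_num());
    }
    return 0;
}

Note that with a single-member team the barriers degenerate to no-ops,
which is why the OMP_NUM_THREADS=1 run above sails straight through all
four checkpoints twice in a row.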

Alexander
