Message-ID: <20130725021409.GD10917@openwall.com>
Date: Thu, 25 Jul 2013 06:14:09 +0400
From: Solar Designer <solar@...nwall.com>
To: john-dev@...ts.openwall.com
Subject: Parallella: bcrypt (was: Katja's weekly report #6)

Katja, Yaniv -

On Wed, Jul 24, 2013 at 07:18:23PM +0200, Katja Malvoni wrote:
> On Wed, Jul 24, 2013 at 2:28 AM, Solar Designer <solar@...nwall.com> wrote:
> > My concern is that, if I understood Yaniv correctly, the data transfers
> > to/from Epiphany do not have to complete in-order, yet your code relies
> > on the flags being seen strictly before/after the data.
> 
> That's true. But if I do some sort of check from the host that the new
> data was written and only then set the start flag, it has to be a busy
> wait and that will make it slower. Any other approach will have the
> same problem.

I'm afraid I don't understand what you're comparing here.

Yaniv - what would be a safe approach to start computation on an
Epiphany core only when all (new) data is available to the core, and to
read its results only when all (new) results are available to the host?
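
For concreteness, the pattern I have in mind is roughly the sketch
below.  It is not verified against the eLink's actual ordering
guarantees, and the names data_off, flag_off, buf, check, go, and flag
are made up for illustration:

    /* 1. write the data */
    e_write(&emem, 0, 0, data_off, buf, buf_size);
    /* 2. spin until a read-back confirms the data is visible */
    do {
        e_read(&emem, 0, 0, data_off, check, buf_size);
    } while (memcmp(check, buf, buf_size) != 0);
    /* 3. raise the start flag */
    e_write(&emem, 0, 0, flag_off, &go, sizeof(go));
    /* 4. the core clears the flag only after fully writing its
       results; the host treats a cleared flag as "results ready" */
    do {
        e_read(&emem, 0, 0, flag_off, &flag, sizeof(flag));
    } while (flag != 0);

Whether step 2's read-back actually guarantees that the writes have
completed as seen from the Epiphany side is part of the question.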

> So I replaced the for loop used to send data to the cores with this:
> 
>     core_start = 16;
>     for (i = 0; i < platform.rows * platform.cols; i++)
>     {
>         ERR(e_write(&emem, 0, 0, offsetof(data, setting[i]),
>             &saved_salt, sizeof(BF_salt)),
>             "Writing salt to shared memory failed!\n");
>         while (test_salt.salt[0] != saved_salt.salt[0])
>             ERR(e_read(&emem, 0, 0, offsetof(data, setting[i]),
>                 &test_salt, sizeof(BF_salt)),
>                 "Reading salt back failed!\n");
>         ERR(e_write(&emem, 0, 0, offsetof(data, init_key[i]),
>             &BF_init_key[i], sizeof(BF_key)),
>             "Writing key to shared memory failed!\n");
>         while (test[0] != BF_init_key[i][0])
>             ERR(e_read(&emem, 0, 0, offsetof(data, init_key[i]),
>                 &test, sizeof(BF_key)), "Checking key failed!\n");
>         ERR(e_write(&emem, 0, 0, offsetof(data, exp_key[i]),
>             &BF_exp_key[i], sizeof(BF_key)),
>             "Writing key to shared memory failed!\n");
>         while (test[0] != BF_exp_key[i][0])
>             ERR(e_read(&emem, 0, 0, offsetof(data, exp_key[i]),
>                 &test, sizeof(BF_key)), "Checking key failed!\n");
>         ERR(e_write(&emem, 0, 0, offsetof(data, start[i]),
>             &core_start, sizeof(core_start)),
>             "Writing start failed!\n");
>     }
> 
> And now I'm getting 930 c/s. This should be safer because the data is
> there before the start flag is written. But ideally, all array members
> should be tested, not just the first one

Right.

> and that'll make it even slower.

Aren't you reading back all array elements anyway?  Merely testing them
should be relatively fast.  I wouldn't expect a measurable performance
difference from that.
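
For example, a sketch reusing the names from your loop above, with the
single-element test replaced by memcmp() over the buffer you already
read back:

    do {
        ERR(e_read(&emem, 0, 0, offsetof(data, init_key[i]), &test,
            sizeof(BF_key)), "Checking key failed!\n");
    } while (memcmp(test, BF_init_key[i], sizeof(BF_key)) != 0);

memcmp() over a few dozen bytes should be cheap compared to the
e_read() itself.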

> For reading, I have to remember the previous result to be able to
> check that the new result is different. But this would get stuck in
> the case where a core computes the same hash as in the previous
> iteration (what is the probability of this event?).

This can in fact happen in normal JtR usage, although we try to avoid
such redundancy.  If your code relies on this never happening, then your
code should also ensure that you never compute a hash for the same
password twice, or alternatively you may bypass this safety check if the
previous candidate password matches the current one (for this core).
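
As a sketch of the latter (prev_key[] and prev_result[] are
hypothetical host-side per-core copies, and output[i]/BF_out[i] stand
in for however you actually fetch the results):

    if (!memcmp(prev_key[i], BF_init_key[i], sizeof(BF_key))) {
        /* same candidate as last time: the result cannot differ,
           so skip the "result changed" wait */
        memcpy(&BF_out[i], prev_result[i], sizeof(BF_out[i]));
    } else {
        do {
            ERR(e_read(&emem, 0, 0, offsetof(data, output[i]),
                &BF_out[i], sizeof(BF_out[i])),
                "Reading result failed!\n");
        } while (!memcmp(&BF_out[i], prev_result[i],
            sizeof(BF_out[i])));
        memcpy(prev_result[i], &BF_out[i], sizeof(BF_out[i]));
        memcpy(prev_key[i], BF_init_key[i], sizeof(BF_key));
    }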

> If I don't do that, I'm again relying on handshaking, and that
> requires a deterministic order of execution.
> 
> Yaniv, in the Epiphany Architecture Reference 4.13.01.04, p. 20,
> Table 1 - is this order guaranteed only for instructions executed by
> the Epiphany cores? Are reads and writes to/from core local memory
> performed by the host also non-deterministic? On p. 19 of the same
> PDF: "To ensure that these effects do not occur in code that requires
> strong ordering of load and store operations, use run-time
> synchronization calls with order-dependent memory sequences." - where
> can I find more about those calls? I wasn't able to find them in the
> SDK reference (at least not under that name).

Yaniv - I second Katja's request.  You're the expert. :-)

I think that what we need are memory barriers.
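
On the ARM host side, that might look like the sketch below.
__sync_synchronize() is a GCC builtin that emits a full hardware
barrier (dmb on ARM); whether that also constrains the ordering as
seen on the Epiphany side of the eLink is exactly what we need you to
confirm:

    e_write(&emem, 0, 0, offsetof(data, init_key[i]),
        &BF_init_key[i], sizeof(BF_key));
    e_write(&emem, 0, 0, offsetof(data, exp_key[i]),
        &BF_exp_key[i], sizeof(BF_key));
    __sync_synchronize();   /* full memory barrier */
    e_write(&emem, 0, 0, offsetof(data, start[i]), &core_start,
        sizeof(core_start));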

Thanks,

Alexander
