Message-ID: <af991a1fb36a4e43d397bfe1508cde58@smtp.hushmail.com>
Date: Wed, 26 Aug 2015 10:51:43 +0200
From: magnum <john.magnum@...hmail.com>
To: john-dev@...ts.openwall.com
Subject: Re: LWS and GWS auto-tuning

On 2015-08-26 08:59, Solar Designer wrote:
> On Tue, Aug 25, 2015 at 08:36:44PM +0200, magnum wrote:
>> Worst/best 10 for Tahiti (oldoffice failing):
>
> Thanks!  What code version are these benchmarks for?  I ask because some
> of the cleanups you made after committing my patch are not no-ops.

Yes, these tests were made prior to my clean-up.

> Specifically, commit 244d113dce38fcd1ead0f6abf0557863844313b2 with
> comment "OpenCL autotune: Drop obsolete functions get_task_max_size()
> and get_default_workgroup()." appears to change what LWS is used during
> the first GWS auto-tuning run, which also changes the GWS soft-limit for
> that run due to how I am calculating it:

For some formats, it was changed. For ones that just returned 0, it 
didn't change. Looking at the clean-up commit, md5crypt had it as

-static size_t get_default_workgroup()
-{
-       if (cpu(device_info[gpu_id]))
-               return get_platform_vendor_id(platform_id) == DEV_INTEL ?
-                       8 : 1;
-       else
-               return 64;
-}
-

So for GPU, it started with 64. After 244d113, all formats default to 0 
(i.e. running with a NULL LWS, which lets the OpenCL runtime pick the 
work-group size itself). It's opencl_autotune.h:119:

-               local_work_size = get_default_workgroup();
+               local_work_size = 0;
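For illustration, here's roughly what that 0 translates to at enqueue 
time (a sketch only, not the actual JtR code; the helper name is made 
up): a zero LWS ends up as a NULL local_work_size pointer, so the 
driver chooses the work-group size on its own.

#include <CL/cl.h>

/* Hypothetical helper: lws == 0 means "let the runtime choose". */
static cl_int enqueue_kernel_1d(cl_command_queue queue, cl_kernel kernel,
                                size_t gws, size_t lws)
{
	return clEnqueueNDRangeKernel(queue, kernel, 1, NULL,
	                              &gws,
	                              lws ? &lws : NULL, /* NULL LWS */
	                              0, NULL, NULL);
}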

We could change it to a device or kernel query, or we could actually 
change this to a hard-coded version of what md5crypt had (as above), 
which should work fine for the vast majority of formats.
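A kernel query along those lines could look roughly like this (a 
sketch under assumptions: the helper name is made up, and the fallback 
is a simplified version of the old md5crypt defaults):

#include <CL/cl.h>

/* Hypothetical: start from the kernel's preferred work-group size
   multiple, falling back to hard-coded defaults like md5crypt had. */
static size_t default_lws(cl_device_id dev, cl_kernel kernel, int is_cpu)
{
	size_t multiple = 0;

	if (clGetKernelWorkGroupInfo(kernel, dev,
	        CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE,
	        sizeof(multiple), &multiple, NULL) == CL_SUCCESS && multiple)
		return multiple;

	return is_cpu ? 1 : 64; /* hard-coded fallback as in the removed code */
}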

magnum
